00:00:00.001 Started by upstream project "autotest-per-patch" build number 132345 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.065 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.068 The recommended git tool is: git 00:00:00.069 using credential 00000000-0000-0000-0000-000000000002 00:00:00.070 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.102 Fetching changes from the remote Git repository 00:00:00.104 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.160 Using shallow fetch with depth 1 00:00:00.160 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.160 > git --version # timeout=10 00:00:00.224 > git --version # 'git version 2.39.2' 00:00:00.224 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.264 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.264 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.040 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.053 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.067 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.067 > git config core.sparsecheckout # timeout=10 00:00:04.082 > git read-tree -mu HEAD # timeout=10 00:00:04.098 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:04.124 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.124 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.205 [Pipeline] Start of Pipeline 00:00:04.218 [Pipeline] library 00:00:04.220 Loading library shm_lib@master 00:00:04.220 Library shm_lib@master is cached. Copying from home. 00:00:04.238 [Pipeline] node 00:00:04.245 Running on GP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.247 [Pipeline] { 00:00:04.255 [Pipeline] catchError 00:00:04.256 [Pipeline] { 00:00:04.266 [Pipeline] wrap 00:00:04.272 [Pipeline] { 00:00:04.278 [Pipeline] stage 00:00:04.279 [Pipeline] { (Prologue) 00:00:04.464 [Pipeline] sh 00:00:04.747 + logger -p user.info -t JENKINS-CI 00:00:04.762 [Pipeline] echo 00:00:04.763 Node: GP6 00:00:04.770 [Pipeline] sh 00:00:05.065 [Pipeline] setCustomBuildProperty 00:00:05.076 [Pipeline] echo 00:00:05.078 Cleanup processes 00:00:05.082 [Pipeline] sh 00:00:05.364 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.364 2300517 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.376 [Pipeline] sh 00:00:05.660 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.660 ++ grep -v 'sudo pgrep' 00:00:05.660 ++ awk '{print $1}' 00:00:05.660 + sudo kill -9 00:00:05.660 + true 00:00:05.673 [Pipeline] cleanWs 00:00:05.681 [WS-CLEANUP] Deleting project workspace... 00:00:05.681 [WS-CLEANUP] Deferred wipeout is used... 
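For reference, the "Cleanup processes" stage traced above amounts to the following shell sketch; the workspace path is copied from the log, and the final "|| true" mirrors the "+ true" in the trace so the stage still succeeds when no stale SPDK processes are found:

    # list PIDs of any leftover SPDK processes under the workspace, excluding the pgrep itself
    pids=$(sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk | grep -v 'sudo pgrep' | awk '{print $1}')
    # force-kill them; tolerate the usual case where the list is empty
    sudo kill -9 $pids || true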
00:00:05.688 [WS-CLEANUP] done 00:00:05.691 [Pipeline] setCustomBuildProperty 00:00:05.704 [Pipeline] sh 00:00:05.983 + sudo git config --global --replace-all safe.directory '*' 00:00:06.057 [Pipeline] httpRequest 00:00:06.613 [Pipeline] echo 00:00:06.615 Sorcerer 10.211.164.20 is alive 00:00:06.623 [Pipeline] retry 00:00:06.624 [Pipeline] { 00:00:06.637 [Pipeline] httpRequest 00:00:06.642 HttpMethod: GET 00:00:06.642 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.643 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.646 Response Code: HTTP/1.1 200 OK 00:00:06.646 Success: Status code 200 is in the accepted range: 200,404 00:00:06.646 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.543 [Pipeline] } 00:00:07.561 [Pipeline] // retry 00:00:07.567 [Pipeline] sh 00:00:07.854 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.872 [Pipeline] httpRequest 00:00:08.626 [Pipeline] echo 00:00:08.628 Sorcerer 10.211.164.20 is alive 00:00:08.639 [Pipeline] retry 00:00:08.641 [Pipeline] { 00:00:08.652 [Pipeline] httpRequest 00:00:08.656 HttpMethod: GET 00:00:08.656 URL: http://10.211.164.20/packages/spdk_5716007f505719c840e6ef64188fcf6d0799f40a.tar.gz 00:00:08.657 Sending request to url: http://10.211.164.20/packages/spdk_5716007f505719c840e6ef64188fcf6d0799f40a.tar.gz 00:00:08.668 Response Code: HTTP/1.1 200 OK 00:00:08.669 Success: Status code 200 is in the accepted range: 200,404 00:00:08.669 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_5716007f505719c840e6ef64188fcf6d0799f40a.tar.gz 00:01:07.593 [Pipeline] } 00:01:07.609 [Pipeline] // retry 00:01:07.616 [Pipeline] sh 00:01:07.901 + tar --no-same-owner -xf spdk_5716007f505719c840e6ef64188fcf6d0799f40a.tar.gz 00:01:11.191 [Pipeline] sh 00:01:11.477 + git -C spdk log --oneline -n5 00:01:11.477 5716007f5 dif: Set DIF field to 0 explicitly if its check is disabled 00:01:11.477 af5bcb946 bdev: Insert metadata using bounce/accel buffer if I/O is not aware of metadata 00:01:11.477 12962b97e ut/bdev: Remove duplication with many stups among unit test files 00:01:11.477 8ccf9ce7b accel: Fix a bug that append_dif_generate_copy() did not set dif_ctx 00:01:11.477 ac2633210 accel: Fix comments for spdk_accel_*_dif_verify_copy() 00:01:11.489 [Pipeline] } 00:01:11.505 [Pipeline] // stage 00:01:11.515 [Pipeline] stage 00:01:11.518 [Pipeline] { (Prepare) 00:01:11.537 [Pipeline] writeFile 00:01:11.556 [Pipeline] sh 00:01:11.840 + logger -p user.info -t JENKINS-CI 00:01:11.853 [Pipeline] sh 00:01:12.137 + logger -p user.info -t JENKINS-CI 00:01:12.149 [Pipeline] sh 00:01:12.433 + cat autorun-spdk.conf 00:01:12.433 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.433 SPDK_TEST_NVMF=1 00:01:12.433 SPDK_TEST_NVME_CLI=1 00:01:12.433 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:12.433 SPDK_TEST_NVMF_NICS=e810 00:01:12.433 SPDK_TEST_VFIOUSER=1 00:01:12.433 SPDK_RUN_UBSAN=1 00:01:12.433 NET_TYPE=phy 00:01:12.441 RUN_NIGHTLY=0 00:01:12.445 [Pipeline] readFile 00:01:12.467 [Pipeline] withEnv 00:01:12.469 [Pipeline] { 00:01:12.481 [Pipeline] sh 00:01:12.766 + set -ex 00:01:12.766 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:12.766 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:12.766 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.766 ++ SPDK_TEST_NVMF=1 00:01:12.766 ++ SPDK_TEST_NVME_CLI=1 
00:01:12.766 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:12.766 ++ SPDK_TEST_NVMF_NICS=e810 00:01:12.766 ++ SPDK_TEST_VFIOUSER=1 00:01:12.766 ++ SPDK_RUN_UBSAN=1 00:01:12.766 ++ NET_TYPE=phy 00:01:12.766 ++ RUN_NIGHTLY=0 00:01:12.766 + case $SPDK_TEST_NVMF_NICS in 00:01:12.766 + DRIVERS=ice 00:01:12.766 + [[ tcp == \r\d\m\a ]] 00:01:12.766 + [[ -n ice ]] 00:01:12.766 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:12.766 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:12.766 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:12.766 rmmod: ERROR: Module irdma is not currently loaded 00:01:12.766 rmmod: ERROR: Module i40iw is not currently loaded 00:01:12.766 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:12.766 + true 00:01:12.766 + for D in $DRIVERS 00:01:12.766 + sudo modprobe ice 00:01:12.766 + exit 0 00:01:12.775 [Pipeline] } 00:01:12.789 [Pipeline] // withEnv 00:01:12.794 [Pipeline] } 00:01:12.809 [Pipeline] // stage 00:01:12.818 [Pipeline] catchError 00:01:12.819 [Pipeline] { 00:01:12.833 [Pipeline] timeout 00:01:12.834 Timeout set to expire in 1 hr 0 min 00:01:12.835 [Pipeline] { 00:01:12.849 [Pipeline] stage 00:01:12.851 [Pipeline] { (Tests) 00:01:12.864 [Pipeline] sh 00:01:13.151 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:13.152 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:13.152 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:13.152 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:13.152 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:13.152 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:13.152 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:13.152 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:13.152 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:13.152 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:13.152 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:13.152 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:13.152 + source /etc/os-release 00:01:13.152 ++ NAME='Fedora Linux' 00:01:13.152 ++ VERSION='39 (Cloud Edition)' 00:01:13.152 ++ ID=fedora 00:01:13.152 ++ VERSION_ID=39 00:01:13.152 ++ VERSION_CODENAME= 00:01:13.152 ++ PLATFORM_ID=platform:f39 00:01:13.152 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:13.152 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:13.152 ++ LOGO=fedora-logo-icon 00:01:13.152 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:13.152 ++ HOME_URL=https://fedoraproject.org/ 00:01:13.152 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:13.152 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:13.152 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:13.152 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:13.152 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:13.152 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:13.152 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:13.152 ++ SUPPORT_END=2024-11-12 00:01:13.152 ++ VARIANT='Cloud Edition' 00:01:13.152 ++ VARIANT_ID=cloud 00:01:13.152 + uname -a 00:01:13.152 Linux spdk-gp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:13.152 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:14.091 Hugepages 00:01:14.091 node hugesize free / total 00:01:14.091 node0 1048576kB 0 / 0 00:01:14.091 node0 2048kB 0 / 0 00:01:14.091 node1 1048576kB 0 / 0 00:01:14.091 node1 2048kB 0 / 0 00:01:14.091 00:01:14.091 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:14.091 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:14.091 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:14.091 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:14.091 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:14.091 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:14.091 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:14.091 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:14.091 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:14.091 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:14.091 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:14.091 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:14.091 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:14.091 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:14.091 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:14.091 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:14.091 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:14.091 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:14.091 + rm -f /tmp/spdk-ld-path 00:01:14.091 + source autorun-spdk.conf 00:01:14.091 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:14.091 ++ SPDK_TEST_NVMF=1 00:01:14.091 ++ SPDK_TEST_NVME_CLI=1 00:01:14.091 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:14.091 ++ SPDK_TEST_NVMF_NICS=e810 00:01:14.091 ++ SPDK_TEST_VFIOUSER=1 00:01:14.091 ++ SPDK_RUN_UBSAN=1 00:01:14.091 ++ NET_TYPE=phy 00:01:14.091 ++ RUN_NIGHTLY=0 00:01:14.091 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:14.091 + [[ -n '' ]] 00:01:14.091 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:14.349 + for M in /var/spdk/build-*-manifest.txt 00:01:14.349 + [[ -f 
/var/spdk/build-kernel-manifest.txt ]] 00:01:14.349 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:14.349 + for M in /var/spdk/build-*-manifest.txt 00:01:14.349 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:14.349 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:14.349 + for M in /var/spdk/build-*-manifest.txt 00:01:14.349 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:14.349 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:14.349 ++ uname 00:01:14.349 + [[ Linux == \L\i\n\u\x ]] 00:01:14.349 + sudo dmesg -T 00:01:14.349 + sudo dmesg --clear 00:01:14.349 + dmesg_pid=2301821 00:01:14.349 + [[ Fedora Linux == FreeBSD ]] 00:01:14.349 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:14.349 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:14.349 + sudo dmesg -Tw 00:01:14.349 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:14.349 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:14.349 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:14.349 + [[ -x /usr/src/fio-static/fio ]] 00:01:14.349 + export FIO_BIN=/usr/src/fio-static/fio 00:01:14.349 + FIO_BIN=/usr/src/fio-static/fio 00:01:14.349 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:14.349 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:14.349 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:14.349 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:14.349 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:14.349 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:14.349 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:14.349 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:14.349 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:14.349 07:03:17 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:14.349 07:03:17 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:14.349 07:03:17 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:14.349 07:03:17 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:14.349 07:03:17 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:14.349 07:03:17 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:14.349 07:03:17 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:14.349 07:03:17 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:01:14.349 07:03:17 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:14.349 07:03:17 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:14.349 07:03:17 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:14.349 07:03:17 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:14.349 07:03:17 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:14.349 07:03:17 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:14.349 07:03:17 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:14.349 07:03:17 -- 
scripts/common.sh@15 -- $ shopt -s extglob 00:01:14.349 07:03:17 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:14.349 07:03:17 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:14.349 07:03:17 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:14.349 07:03:17 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:14.349 07:03:17 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:14.350 07:03:17 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:14.350 07:03:17 -- paths/export.sh@5 -- $ export PATH 00:01:14.350 07:03:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:14.350 07:03:17 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:14.350 07:03:17 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:14.350 07:03:17 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1732082597.XXXXXX 00:01:14.350 07:03:17 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1732082597.ccTTy1 00:01:14.350 07:03:17 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:14.350 07:03:17 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:14.350 07:03:17 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:14.350 07:03:17 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:14.350 07:03:17 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:14.350 07:03:17 -- 
common/autobuild_common.sh@502 -- $ get_config_params 00:01:14.350 07:03:17 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:14.350 07:03:17 -- common/autotest_common.sh@10 -- $ set +x 00:01:14.350 07:03:17 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:14.350 07:03:17 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:14.350 07:03:17 -- pm/common@17 -- $ local monitor 00:01:14.350 07:03:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:14.350 07:03:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:14.350 07:03:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:14.350 07:03:17 -- pm/common@21 -- $ date +%s 00:01:14.350 07:03:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:14.350 07:03:17 -- pm/common@21 -- $ date +%s 00:01:14.350 07:03:17 -- pm/common@25 -- $ sleep 1 00:01:14.350 07:03:17 -- pm/common@21 -- $ date +%s 00:01:14.350 07:03:17 -- pm/common@21 -- $ date +%s 00:01:14.350 07:03:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732082597 00:01:14.350 07:03:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732082597 00:01:14.350 07:03:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732082597 00:01:14.350 07:03:17 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732082597 00:01:14.350 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732082597_collect-cpu-load.pm.log 00:01:14.350 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732082597_collect-vmstat.pm.log 00:01:14.350 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732082597_collect-cpu-temp.pm.log 00:01:14.350 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732082597_collect-bmc-pm.bmc.pm.log 00:01:15.287 07:03:18 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:15.287 07:03:18 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:15.287 07:03:18 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:15.287 07:03:18 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:15.287 07:03:18 -- spdk/autobuild.sh@16 -- $ date -u 00:01:15.287 Wed Nov 20 06:03:18 AM UTC 2024 00:01:15.287 07:03:18 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:15.546 v25.01-pre-201-g5716007f5 00:01:15.546 07:03:18 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:15.546 07:03:18 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:15.546 07:03:18 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:15.546 07:03:18 -- common/autotest_common.sh@1103 -- $ 
'[' 3 -le 1 ']' 00:01:15.546 07:03:18 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:15.546 07:03:18 -- common/autotest_common.sh@10 -- $ set +x 00:01:15.546 ************************************ 00:01:15.546 START TEST ubsan 00:01:15.546 ************************************ 00:01:15.546 07:03:18 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:01:15.546 using ubsan 00:01:15.546 00:01:15.546 real 0m0.000s 00:01:15.546 user 0m0.000s 00:01:15.546 sys 0m0.000s 00:01:15.546 07:03:18 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:01:15.546 07:03:18 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:15.546 ************************************ 00:01:15.546 END TEST ubsan 00:01:15.546 ************************************ 00:01:15.546 07:03:18 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:15.546 07:03:18 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:15.546 07:03:18 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:15.546 07:03:18 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:15.546 07:03:18 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:15.546 07:03:18 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:15.546 07:03:18 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:15.546 07:03:18 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:15.546 07:03:18 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:15.546 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:15.546 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:15.805 Using 'verbs' RDMA provider 00:01:26.350 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:36.326 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:36.582 Creating mk/config.mk...done. 00:01:36.582 Creating mk/cc.flags.mk...done. 00:01:36.582 Type 'make' to build. 00:01:36.582 07:03:39 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:01:36.582 07:03:39 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:36.582 07:03:39 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:36.582 07:03:39 -- common/autotest_common.sh@10 -- $ set +x 00:01:36.582 ************************************ 00:01:36.582 START TEST make 00:01:36.582 ************************************ 00:01:36.582 07:03:39 make -- common/autotest_common.sh@1127 -- $ make -j48 00:01:36.842 make[1]: Nothing to be done for 'all'. 
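To reproduce this build configuration outside the CI job, the same configure flags recorded in the autobuild trace above can be passed to SPDK's configure script directly; a minimal sketch, assuming the SPDK tree is the current directory and fio sources live at /usr/src/fio as in the log:

    # configure flags copied verbatim from the spdk/autobuild.sh trace above
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
    make -j48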
00:01:38.755 The Meson build system 00:01:38.755 Version: 1.5.0 00:01:38.755 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:38.755 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:38.755 Build type: native build 00:01:38.755 Project name: libvfio-user 00:01:38.755 Project version: 0.0.1 00:01:38.755 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:38.755 C linker for the host machine: cc ld.bfd 2.40-14 00:01:38.755 Host machine cpu family: x86_64 00:01:38.755 Host machine cpu: x86_64 00:01:38.755 Run-time dependency threads found: YES 00:01:38.755 Library dl found: YES 00:01:38.755 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:38.755 Run-time dependency json-c found: YES 0.17 00:01:38.755 Run-time dependency cmocka found: YES 1.1.7 00:01:38.755 Program pytest-3 found: NO 00:01:38.755 Program flake8 found: NO 00:01:38.755 Program misspell-fixer found: NO 00:01:38.755 Program restructuredtext-lint found: NO 00:01:38.755 Program valgrind found: YES (/usr/bin/valgrind) 00:01:38.755 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:38.755 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:38.755 Compiler for C supports arguments -Wwrite-strings: YES 00:01:38.755 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:38.755 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:38.755 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:38.755 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:38.755 Build targets in project: 8 00:01:38.755 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:38.755 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:38.755 00:01:38.755 libvfio-user 0.0.1 00:01:38.755 00:01:38.755 User defined options 00:01:38.755 buildtype : debug 00:01:38.755 default_library: shared 00:01:38.755 libdir : /usr/local/lib 00:01:38.755 00:01:38.755 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:39.335 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:39.603 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:39.603 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:39.603 [3/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:39.603 [4/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:39.603 [5/37] Compiling C object samples/null.p/null.c.o 00:01:39.603 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:39.603 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:39.603 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:39.603 [9/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:39.603 [10/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:39.603 [11/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:39.603 [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:39.603 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:39.603 [14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:39.603 [15/37] Compiling C object samples/server.p/server.c.o 00:01:39.603 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:39.603 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:39.603 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:39.862 [19/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:39.862 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:39.862 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:39.862 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:39.862 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:39.862 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:39.862 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:39.862 [26/37] Compiling C object samples/client.p/client.c.o 00:01:39.862 [27/37] Linking target samples/client 00:01:39.862 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:39.862 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:39.862 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:40.128 [31/37] Linking target test/unit_tests 00:01:40.128 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:40.128 [33/37] Linking target samples/server 00:01:40.128 [34/37] Linking target samples/lspci 00:01:40.128 [35/37] Linking target samples/null 00:01:40.128 [36/37] Linking target samples/gpio-pci-idio-16 00:01:40.128 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:40.128 INFO: autodetecting backend as ninja 00:01:40.128 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
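The libvfio-user subproject above is driven through the standard Meson flow; a rough equivalent of the sequence the trace records (the install step appears immediately below), with the build directory, buildtype, default_library and libdir options taken from the log and the exact wrapper-script details omitted:

    # configure, build and stage libvfio-user under the SPDK build tree
    meson setup build-debug --buildtype=debug --default-library=shared --libdir=/usr/local/lib
    ninja -C build-debug
    DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user \
        meson install --quiet -C build-debug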
00:01:40.394 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:41.325 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:41.325 ninja: no work to do. 00:01:45.558 The Meson build system 00:01:45.558 Version: 1.5.0 00:01:45.558 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:45.558 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:45.558 Build type: native build 00:01:45.558 Program cat found: YES (/usr/bin/cat) 00:01:45.558 Project name: DPDK 00:01:45.558 Project version: 24.03.0 00:01:45.558 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:45.558 C linker for the host machine: cc ld.bfd 2.40-14 00:01:45.558 Host machine cpu family: x86_64 00:01:45.558 Host machine cpu: x86_64 00:01:45.558 Message: ## Building in Developer Mode ## 00:01:45.558 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:45.558 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:45.558 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:45.558 Program python3 found: YES (/usr/bin/python3) 00:01:45.558 Program cat found: YES (/usr/bin/cat) 00:01:45.558 Compiler for C supports arguments -march=native: YES 00:01:45.559 Checking for size of "void *" : 8 00:01:45.559 Checking for size of "void *" : 8 (cached) 00:01:45.559 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:45.559 Library m found: YES 00:01:45.559 Library numa found: YES 00:01:45.559 Has header "numaif.h" : YES 00:01:45.559 Library fdt found: NO 00:01:45.559 Library execinfo found: NO 00:01:45.559 Has header "execinfo.h" : YES 00:01:45.559 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:45.559 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:45.559 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:45.559 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:45.559 Run-time dependency openssl found: YES 3.1.1 00:01:45.559 Run-time dependency libpcap found: YES 1.10.4 00:01:45.559 Has header "pcap.h" with dependency libpcap: YES 00:01:45.559 Compiler for C supports arguments -Wcast-qual: YES 00:01:45.559 Compiler for C supports arguments -Wdeprecated: YES 00:01:45.559 Compiler for C supports arguments -Wformat: YES 00:01:45.559 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:45.559 Compiler for C supports arguments -Wformat-security: NO 00:01:45.559 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:45.559 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:45.559 Compiler for C supports arguments -Wnested-externs: YES 00:01:45.559 Compiler for C supports arguments -Wold-style-definition: YES 00:01:45.559 Compiler for C supports arguments -Wpointer-arith: YES 00:01:45.559 Compiler for C supports arguments -Wsign-compare: YES 00:01:45.559 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:45.559 Compiler for C supports arguments -Wundef: YES 00:01:45.559 Compiler for C supports arguments -Wwrite-strings: YES 00:01:45.559 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:45.559 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:01:45.559 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:45.559 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:45.559 Program objdump found: YES (/usr/bin/objdump) 00:01:45.559 Compiler for C supports arguments -mavx512f: YES 00:01:45.559 Checking if "AVX512 checking" compiles: YES 00:01:45.559 Fetching value of define "__SSE4_2__" : 1 00:01:45.559 Fetching value of define "__AES__" : 1 00:01:45.559 Fetching value of define "__AVX__" : 1 00:01:45.559 Fetching value of define "__AVX2__" : (undefined) 00:01:45.559 Fetching value of define "__AVX512BW__" : (undefined) 00:01:45.559 Fetching value of define "__AVX512CD__" : (undefined) 00:01:45.559 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:45.559 Fetching value of define "__AVX512F__" : (undefined) 00:01:45.559 Fetching value of define "__AVX512VL__" : (undefined) 00:01:45.559 Fetching value of define "__PCLMUL__" : 1 00:01:45.559 Fetching value of define "__RDRND__" : 1 00:01:45.559 Fetching value of define "__RDSEED__" : (undefined) 00:01:45.559 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:45.559 Fetching value of define "__znver1__" : (undefined) 00:01:45.559 Fetching value of define "__znver2__" : (undefined) 00:01:45.559 Fetching value of define "__znver3__" : (undefined) 00:01:45.559 Fetching value of define "__znver4__" : (undefined) 00:01:45.559 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:45.559 Message: lib/log: Defining dependency "log" 00:01:45.559 Message: lib/kvargs: Defining dependency "kvargs" 00:01:45.559 Message: lib/telemetry: Defining dependency "telemetry" 00:01:45.559 Checking for function "getentropy" : NO 00:01:45.559 Message: lib/eal: Defining dependency "eal" 00:01:45.559 Message: lib/ring: Defining dependency "ring" 00:01:45.559 Message: lib/rcu: Defining dependency "rcu" 00:01:45.559 Message: lib/mempool: Defining dependency "mempool" 00:01:45.559 Message: lib/mbuf: Defining dependency "mbuf" 00:01:45.559 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:45.559 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:45.559 Compiler for C supports arguments -mpclmul: YES 00:01:45.559 Compiler for C supports arguments -maes: YES 00:01:45.559 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:45.559 Compiler for C supports arguments -mavx512bw: YES 00:01:45.559 Compiler for C supports arguments -mavx512dq: YES 00:01:45.559 Compiler for C supports arguments -mavx512vl: YES 00:01:45.559 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:45.559 Compiler for C supports arguments -mavx2: YES 00:01:45.559 Compiler for C supports arguments -mavx: YES 00:01:45.559 Message: lib/net: Defining dependency "net" 00:01:45.559 Message: lib/meter: Defining dependency "meter" 00:01:45.559 Message: lib/ethdev: Defining dependency "ethdev" 00:01:45.559 Message: lib/pci: Defining dependency "pci" 00:01:45.559 Message: lib/cmdline: Defining dependency "cmdline" 00:01:45.559 Message: lib/hash: Defining dependency "hash" 00:01:45.559 Message: lib/timer: Defining dependency "timer" 00:01:45.559 Message: lib/compressdev: Defining dependency "compressdev" 00:01:45.559 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:45.559 Message: lib/dmadev: Defining dependency "dmadev" 00:01:45.559 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:45.559 Message: lib/power: Defining dependency "power" 00:01:45.559 Message: lib/reorder: Defining dependency 
"reorder" 00:01:45.559 Message: lib/security: Defining dependency "security" 00:01:45.559 Has header "linux/userfaultfd.h" : YES 00:01:45.559 Has header "linux/vduse.h" : YES 00:01:45.559 Message: lib/vhost: Defining dependency "vhost" 00:01:45.559 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:45.559 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:45.559 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:45.559 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:45.559 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:45.559 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:45.559 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:45.559 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:45.559 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:45.559 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:45.559 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:45.559 Configuring doxy-api-html.conf using configuration 00:01:45.559 Configuring doxy-api-man.conf using configuration 00:01:45.559 Program mandb found: YES (/usr/bin/mandb) 00:01:45.559 Program sphinx-build found: NO 00:01:45.559 Configuring rte_build_config.h using configuration 00:01:45.559 Message: 00:01:45.559 ================= 00:01:45.559 Applications Enabled 00:01:45.559 ================= 00:01:45.559 00:01:45.559 apps: 00:01:45.559 00:01:45.559 00:01:45.559 Message: 00:01:45.559 ================= 00:01:45.559 Libraries Enabled 00:01:45.559 ================= 00:01:45.559 00:01:45.559 libs: 00:01:45.559 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:45.559 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:45.559 cryptodev, dmadev, power, reorder, security, vhost, 00:01:45.559 00:01:45.559 Message: 00:01:45.559 =============== 00:01:45.559 Drivers Enabled 00:01:45.559 =============== 00:01:45.559 00:01:45.559 common: 00:01:45.559 00:01:45.559 bus: 00:01:45.559 pci, vdev, 00:01:45.559 mempool: 00:01:45.559 ring, 00:01:45.559 dma: 00:01:45.559 00:01:45.559 net: 00:01:45.559 00:01:45.559 crypto: 00:01:45.559 00:01:45.559 compress: 00:01:45.559 00:01:45.559 vdpa: 00:01:45.559 00:01:45.559 00:01:45.559 Message: 00:01:45.559 ================= 00:01:45.559 Content Skipped 00:01:45.559 ================= 00:01:45.559 00:01:45.559 apps: 00:01:45.559 dumpcap: explicitly disabled via build config 00:01:45.559 graph: explicitly disabled via build config 00:01:45.559 pdump: explicitly disabled via build config 00:01:45.559 proc-info: explicitly disabled via build config 00:01:45.559 test-acl: explicitly disabled via build config 00:01:45.559 test-bbdev: explicitly disabled via build config 00:01:45.559 test-cmdline: explicitly disabled via build config 00:01:45.559 test-compress-perf: explicitly disabled via build config 00:01:45.559 test-crypto-perf: explicitly disabled via build config 00:01:45.559 test-dma-perf: explicitly disabled via build config 00:01:45.559 test-eventdev: explicitly disabled via build config 00:01:45.559 test-fib: explicitly disabled via build config 00:01:45.559 test-flow-perf: explicitly disabled via build config 00:01:45.559 test-gpudev: explicitly disabled via build config 00:01:45.559 test-mldev: explicitly disabled via build config 00:01:45.559 test-pipeline: explicitly disabled via build config 00:01:45.559 test-pmd: explicitly 
disabled via build config 00:01:45.559 test-regex: explicitly disabled via build config 00:01:45.559 test-sad: explicitly disabled via build config 00:01:45.559 test-security-perf: explicitly disabled via build config 00:01:45.559 00:01:45.559 libs: 00:01:45.559 argparse: explicitly disabled via build config 00:01:45.559 metrics: explicitly disabled via build config 00:01:45.559 acl: explicitly disabled via build config 00:01:45.559 bbdev: explicitly disabled via build config 00:01:45.559 bitratestats: explicitly disabled via build config 00:01:45.559 bpf: explicitly disabled via build config 00:01:45.559 cfgfile: explicitly disabled via build config 00:01:45.559 distributor: explicitly disabled via build config 00:01:45.559 efd: explicitly disabled via build config 00:01:45.559 eventdev: explicitly disabled via build config 00:01:45.559 dispatcher: explicitly disabled via build config 00:01:45.559 gpudev: explicitly disabled via build config 00:01:45.559 gro: explicitly disabled via build config 00:01:45.559 gso: explicitly disabled via build config 00:01:45.559 ip_frag: explicitly disabled via build config 00:01:45.559 jobstats: explicitly disabled via build config 00:01:45.559 latencystats: explicitly disabled via build config 00:01:45.559 lpm: explicitly disabled via build config 00:01:45.559 member: explicitly disabled via build config 00:01:45.559 pcapng: explicitly disabled via build config 00:01:45.560 rawdev: explicitly disabled via build config 00:01:45.560 regexdev: explicitly disabled via build config 00:01:45.560 mldev: explicitly disabled via build config 00:01:45.560 rib: explicitly disabled via build config 00:01:45.560 sched: explicitly disabled via build config 00:01:45.560 stack: explicitly disabled via build config 00:01:45.560 ipsec: explicitly disabled via build config 00:01:45.560 pdcp: explicitly disabled via build config 00:01:45.560 fib: explicitly disabled via build config 00:01:45.560 port: explicitly disabled via build config 00:01:45.560 pdump: explicitly disabled via build config 00:01:45.560 table: explicitly disabled via build config 00:01:45.560 pipeline: explicitly disabled via build config 00:01:45.560 graph: explicitly disabled via build config 00:01:45.560 node: explicitly disabled via build config 00:01:45.560 00:01:45.560 drivers: 00:01:45.560 common/cpt: not in enabled drivers build config 00:01:45.560 common/dpaax: not in enabled drivers build config 00:01:45.560 common/iavf: not in enabled drivers build config 00:01:45.560 common/idpf: not in enabled drivers build config 00:01:45.560 common/ionic: not in enabled drivers build config 00:01:45.560 common/mvep: not in enabled drivers build config 00:01:45.560 common/octeontx: not in enabled drivers build config 00:01:45.560 bus/auxiliary: not in enabled drivers build config 00:01:45.560 bus/cdx: not in enabled drivers build config 00:01:45.560 bus/dpaa: not in enabled drivers build config 00:01:45.560 bus/fslmc: not in enabled drivers build config 00:01:45.560 bus/ifpga: not in enabled drivers build config 00:01:45.560 bus/platform: not in enabled drivers build config 00:01:45.560 bus/uacce: not in enabled drivers build config 00:01:45.560 bus/vmbus: not in enabled drivers build config 00:01:45.560 common/cnxk: not in enabled drivers build config 00:01:45.560 common/mlx5: not in enabled drivers build config 00:01:45.560 common/nfp: not in enabled drivers build config 00:01:45.560 common/nitrox: not in enabled drivers build config 00:01:45.560 common/qat: not in enabled drivers build config 
00:01:45.560 common/sfc_efx: not in enabled drivers build config 00:01:45.560 mempool/bucket: not in enabled drivers build config 00:01:45.560 mempool/cnxk: not in enabled drivers build config 00:01:45.560 mempool/dpaa: not in enabled drivers build config 00:01:45.560 mempool/dpaa2: not in enabled drivers build config 00:01:45.560 mempool/octeontx: not in enabled drivers build config 00:01:45.560 mempool/stack: not in enabled drivers build config 00:01:45.560 dma/cnxk: not in enabled drivers build config 00:01:45.560 dma/dpaa: not in enabled drivers build config 00:01:45.560 dma/dpaa2: not in enabled drivers build config 00:01:45.560 dma/hisilicon: not in enabled drivers build config 00:01:45.560 dma/idxd: not in enabled drivers build config 00:01:45.560 dma/ioat: not in enabled drivers build config 00:01:45.560 dma/skeleton: not in enabled drivers build config 00:01:45.560 net/af_packet: not in enabled drivers build config 00:01:45.560 net/af_xdp: not in enabled drivers build config 00:01:45.560 net/ark: not in enabled drivers build config 00:01:45.560 net/atlantic: not in enabled drivers build config 00:01:45.560 net/avp: not in enabled drivers build config 00:01:45.560 net/axgbe: not in enabled drivers build config 00:01:45.560 net/bnx2x: not in enabled drivers build config 00:01:45.560 net/bnxt: not in enabled drivers build config 00:01:45.560 net/bonding: not in enabled drivers build config 00:01:45.560 net/cnxk: not in enabled drivers build config 00:01:45.560 net/cpfl: not in enabled drivers build config 00:01:45.560 net/cxgbe: not in enabled drivers build config 00:01:45.560 net/dpaa: not in enabled drivers build config 00:01:45.560 net/dpaa2: not in enabled drivers build config 00:01:45.560 net/e1000: not in enabled drivers build config 00:01:45.560 net/ena: not in enabled drivers build config 00:01:45.560 net/enetc: not in enabled drivers build config 00:01:45.560 net/enetfec: not in enabled drivers build config 00:01:45.560 net/enic: not in enabled drivers build config 00:01:45.560 net/failsafe: not in enabled drivers build config 00:01:45.560 net/fm10k: not in enabled drivers build config 00:01:45.560 net/gve: not in enabled drivers build config 00:01:45.560 net/hinic: not in enabled drivers build config 00:01:45.560 net/hns3: not in enabled drivers build config 00:01:45.560 net/i40e: not in enabled drivers build config 00:01:45.560 net/iavf: not in enabled drivers build config 00:01:45.560 net/ice: not in enabled drivers build config 00:01:45.560 net/idpf: not in enabled drivers build config 00:01:45.560 net/igc: not in enabled drivers build config 00:01:45.560 net/ionic: not in enabled drivers build config 00:01:45.560 net/ipn3ke: not in enabled drivers build config 00:01:45.560 net/ixgbe: not in enabled drivers build config 00:01:45.560 net/mana: not in enabled drivers build config 00:01:45.560 net/memif: not in enabled drivers build config 00:01:45.560 net/mlx4: not in enabled drivers build config 00:01:45.560 net/mlx5: not in enabled drivers build config 00:01:45.560 net/mvneta: not in enabled drivers build config 00:01:45.560 net/mvpp2: not in enabled drivers build config 00:01:45.560 net/netvsc: not in enabled drivers build config 00:01:45.560 net/nfb: not in enabled drivers build config 00:01:45.560 net/nfp: not in enabled drivers build config 00:01:45.560 net/ngbe: not in enabled drivers build config 00:01:45.560 net/null: not in enabled drivers build config 00:01:45.560 net/octeontx: not in enabled drivers build config 00:01:45.560 net/octeon_ep: not in enabled 
drivers build config 00:01:45.560 net/pcap: not in enabled drivers build config 00:01:45.560 net/pfe: not in enabled drivers build config 00:01:45.560 net/qede: not in enabled drivers build config 00:01:45.560 net/ring: not in enabled drivers build config 00:01:45.560 net/sfc: not in enabled drivers build config 00:01:45.560 net/softnic: not in enabled drivers build config 00:01:45.560 net/tap: not in enabled drivers build config 00:01:45.560 net/thunderx: not in enabled drivers build config 00:01:45.560 net/txgbe: not in enabled drivers build config 00:01:45.560 net/vdev_netvsc: not in enabled drivers build config 00:01:45.560 net/vhost: not in enabled drivers build config 00:01:45.560 net/virtio: not in enabled drivers build config 00:01:45.560 net/vmxnet3: not in enabled drivers build config 00:01:45.560 raw/*: missing internal dependency, "rawdev" 00:01:45.560 crypto/armv8: not in enabled drivers build config 00:01:45.560 crypto/bcmfs: not in enabled drivers build config 00:01:45.560 crypto/caam_jr: not in enabled drivers build config 00:01:45.560 crypto/ccp: not in enabled drivers build config 00:01:45.560 crypto/cnxk: not in enabled drivers build config 00:01:45.560 crypto/dpaa_sec: not in enabled drivers build config 00:01:45.560 crypto/dpaa2_sec: not in enabled drivers build config 00:01:45.560 crypto/ipsec_mb: not in enabled drivers build config 00:01:45.560 crypto/mlx5: not in enabled drivers build config 00:01:45.560 crypto/mvsam: not in enabled drivers build config 00:01:45.560 crypto/nitrox: not in enabled drivers build config 00:01:45.560 crypto/null: not in enabled drivers build config 00:01:45.560 crypto/octeontx: not in enabled drivers build config 00:01:45.560 crypto/openssl: not in enabled drivers build config 00:01:45.560 crypto/scheduler: not in enabled drivers build config 00:01:45.560 crypto/uadk: not in enabled drivers build config 00:01:45.560 crypto/virtio: not in enabled drivers build config 00:01:45.560 compress/isal: not in enabled drivers build config 00:01:45.560 compress/mlx5: not in enabled drivers build config 00:01:45.560 compress/nitrox: not in enabled drivers build config 00:01:45.560 compress/octeontx: not in enabled drivers build config 00:01:45.560 compress/zlib: not in enabled drivers build config 00:01:45.560 regex/*: missing internal dependency, "regexdev" 00:01:45.561 ml/*: missing internal dependency, "mldev" 00:01:45.561 vdpa/ifc: not in enabled drivers build config 00:01:45.561 vdpa/mlx5: not in enabled drivers build config 00:01:45.561 vdpa/nfp: not in enabled drivers build config 00:01:45.561 vdpa/sfc: not in enabled drivers build config 00:01:45.561 event/*: missing internal dependency, "eventdev" 00:01:45.561 baseband/*: missing internal dependency, "bbdev" 00:01:45.561 gpu/*: missing internal dependency, "gpudev" 00:01:45.561 00:01:45.561 00:01:46.127 Build targets in project: 85 00:01:46.127 00:01:46.127 DPDK 24.03.0 00:01:46.127 00:01:46.127 User defined options 00:01:46.127 buildtype : debug 00:01:46.127 default_library : shared 00:01:46.127 libdir : lib 00:01:46.127 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:46.127 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:46.127 c_link_args : 00:01:46.127 cpu_instruction_set: native 00:01:46.127 disable_apps : 
test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:01:46.127 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:01:46.127 enable_docs : false 00:01:46.127 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:46.127 enable_kmods : false 00:01:46.127 max_lcores : 128 00:01:46.127 tests : false 00:01:46.127 00:01:46.127 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:46.391 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:46.663 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:46.663 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:46.663 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:46.663 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:46.663 [5/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:46.663 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:46.663 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:46.663 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:46.663 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:46.663 [10/268] Linking static target lib/librte_kvargs.a 00:01:46.663 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:46.663 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:46.663 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:46.663 [14/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:46.663 [15/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:46.663 [16/268] Linking static target lib/librte_log.a 00:01:47.237 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.499 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:47.499 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:47.499 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:47.499 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:47.499 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:47.499 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:47.499 [24/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:47.499 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:47.499 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:47.499 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:47.499 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:47.499 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:47.499 [30/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 
00:01:47.499 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:47.499 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:47.499 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:47.499 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:47.499 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:47.499 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:47.499 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:47.499 [38/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:47.499 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:47.499 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:47.499 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:47.499 [42/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:47.499 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:47.499 [44/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:47.499 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:47.499 [46/268] Linking static target lib/librte_telemetry.a 00:01:47.499 [47/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:47.499 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:47.499 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:47.499 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:47.499 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:47.499 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:47.499 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:47.499 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:47.499 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:47.499 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:47.758 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:47.758 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:47.758 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:47.758 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:47.758 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:47.758 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:47.758 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:47.758 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:47.758 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:47.758 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:48.019 [67/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:48.019 [68/268] Linking static target lib/librte_pci.a 00:01:48.019 [69/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.019 [70/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:48.019 [71/268] Linking 
target lib/librte_log.so.24.1 00:01:48.283 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:48.283 [73/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:48.283 [74/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:48.283 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:48.283 [76/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:48.283 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:48.283 [78/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:48.283 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:48.283 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:48.544 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:48.544 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:48.544 [83/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:48.544 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:48.544 [85/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:48.544 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:48.544 [87/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:48.544 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:48.544 [89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:48.544 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:48.544 [91/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:48.544 [92/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.544 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:48.544 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:48.544 [95/268] Linking static target lib/librte_ring.a 00:01:48.544 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:48.544 [97/268] Linking target lib/librte_kvargs.so.24.1 00:01:48.544 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:48.544 [99/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:48.544 [100/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:48.544 [101/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:48.544 [102/268] Linking static target lib/librte_meter.a 00:01:48.544 [103/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.544 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:48.544 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:48.544 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:48.544 [107/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:48.544 [108/268] Linking target lib/librte_telemetry.so.24.1 00:01:48.544 [109/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:48.810 [110/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:48.810 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:48.810 [112/268] Compiling C 
object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:48.810 [113/268] Linking static target lib/librte_rcu.a 00:01:48.810 [114/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:48.810 [115/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:48.810 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:48.810 [117/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:48.810 [118/268] Linking static target lib/librte_mempool.a 00:01:48.810 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:48.810 [120/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:48.810 [121/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:48.810 [122/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:48.810 [123/268] Linking static target lib/librte_eal.a 00:01:48.810 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:48.810 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:48.810 [126/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:49.068 [127/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:49.068 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:49.068 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:49.068 [130/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:49.068 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:49.068 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:49.068 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:49.068 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:49.068 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:49.068 [136/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:49.068 [137/268] Linking static target lib/librte_net.a 00:01:49.326 [138/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.327 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:49.327 [140/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:49.327 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:49.327 [142/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.327 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:49.327 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:49.327 [145/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:49.327 [146/268] Linking static target lib/librte_cmdline.a 00:01:49.327 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:49.590 [148/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.590 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:49.590 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:49.590 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:49.590 [152/268] Compiling C object 
lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:49.590 [153/268] Linking static target lib/librte_timer.a 00:01:49.590 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:49.590 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:49.590 [156/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.590 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:49.590 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:49.849 [159/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:49.849 [160/268] Linking static target lib/librte_dmadev.a 00:01:49.849 [161/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:49.849 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:49.849 [163/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:49.849 [164/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:49.849 [165/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:49.849 [166/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:49.849 [167/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:49.849 [168/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.849 [169/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:50.107 [170/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:50.107 [171/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.107 [172/268] Linking static target lib/librte_compressdev.a 00:01:50.107 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:50.107 [174/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:50.107 [175/268] Linking static target lib/librte_power.a 00:01:50.107 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:50.107 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:50.107 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:50.107 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:50.107 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:50.107 [181/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:50.107 [182/268] Linking static target lib/librte_hash.a 00:01:50.107 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:50.107 [184/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.107 [185/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:50.365 [186/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:50.365 [187/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:50.365 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:50.365 [189/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:50.365 [190/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:50.365 [191/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:50.365 
[192/268] Linking static target lib/librte_reorder.a 00:01:50.365 [193/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.365 [194/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:50.365 [195/268] Linking static target lib/librte_mbuf.a 00:01:50.365 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:50.365 [197/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.365 [198/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:50.365 [199/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:50.365 [200/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:50.365 [201/268] Linking static target drivers/librte_bus_vdev.a 00:01:50.622 [202/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:50.622 [203/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:50.622 [204/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:50.622 [205/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:50.622 [206/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:50.622 [207/268] Linking static target drivers/librte_bus_pci.a 00:01:50.622 [208/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:50.622 [209/268] Linking static target lib/librte_security.a 00:01:50.622 [210/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.622 [211/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.622 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:50.622 [213/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:50.622 [214/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.622 [215/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.622 [216/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:50.880 [217/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:50.880 [218/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:50.880 [219/268] Linking static target drivers/librte_mempool_ring.a 00:01:50.880 [220/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.880 [221/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:50.880 [222/268] Linking static target lib/librte_cryptodev.a 00:01:50.880 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.880 [224/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.138 [225/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:51.138 [226/268] Linking static target lib/librte_ethdev.a 00:01:52.071 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.444 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:54.817 [229/268] Generating lib/eal.sym_chk with a 
custom command (wrapped by meson to capture output) 00:01:54.817 [230/268] Linking target lib/librte_eal.so.24.1 00:01:55.073 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:55.073 [232/268] Linking target lib/librte_ring.so.24.1 00:01:55.073 [233/268] Linking target lib/librte_meter.so.24.1 00:01:55.073 [234/268] Linking target lib/librte_pci.so.24.1 00:01:55.073 [235/268] Linking target lib/librte_timer.so.24.1 00:01:55.073 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:55.073 [237/268] Linking target lib/librte_dmadev.so.24.1 00:01:55.073 [238/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.073 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:55.073 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:55.073 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:55.330 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:55.330 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:55.330 [244/268] Linking target lib/librte_rcu.so.24.1 00:01:55.330 [245/268] Linking target lib/librte_mempool.so.24.1 00:01:55.330 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:55.330 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:55.330 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:55.330 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:55.330 [250/268] Linking target lib/librte_mbuf.so.24.1 00:01:55.589 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:55.589 [252/268] Linking target lib/librte_reorder.so.24.1 00:01:55.589 [253/268] Linking target lib/librte_compressdev.so.24.1 00:01:55.589 [254/268] Linking target lib/librte_net.so.24.1 00:01:55.589 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:01:55.589 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:55.589 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:55.589 [258/268] Linking target lib/librte_security.so.24.1 00:01:55.589 [259/268] Linking target lib/librte_cmdline.so.24.1 00:01:55.589 [260/268] Linking target lib/librte_hash.so.24.1 00:01:55.847 [261/268] Linking target lib/librte_ethdev.so.24.1 00:01:55.847 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:55.847 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:55.847 [264/268] Linking target lib/librte_power.so.24.1 00:01:59.129 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:59.129 [266/268] Linking static target lib/librte_vhost.a 00:02:00.063 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.321 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:00.321 INFO: autodetecting backend as ninja 00:02:00.321 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:02:22.250 CC lib/log/log.o 00:02:22.250 CC lib/log/log_flags.o 00:02:22.250 CC lib/log/log_deprecated.o 00:02:22.250 CC lib/ut/ut.o 00:02:22.250 CC lib/ut_mock/mock.o 00:02:22.250 LIB 
libspdk_ut.a 00:02:22.250 LIB libspdk_ut_mock.a 00:02:22.250 LIB libspdk_log.a 00:02:22.250 SO libspdk_ut.so.2.0 00:02:22.250 SO libspdk_ut_mock.so.6.0 00:02:22.250 SO libspdk_log.so.7.1 00:02:22.250 SYMLINK libspdk_ut.so 00:02:22.250 SYMLINK libspdk_ut_mock.so 00:02:22.250 SYMLINK libspdk_log.so 00:02:22.250 CC lib/ioat/ioat.o 00:02:22.250 CXX lib/trace_parser/trace.o 00:02:22.250 CC lib/dma/dma.o 00:02:22.250 CC lib/util/base64.o 00:02:22.250 CC lib/util/bit_array.o 00:02:22.250 CC lib/util/cpuset.o 00:02:22.250 CC lib/util/crc16.o 00:02:22.250 CC lib/util/crc32.o 00:02:22.250 CC lib/util/crc32c.o 00:02:22.250 CC lib/util/crc32_ieee.o 00:02:22.250 CC lib/util/crc64.o 00:02:22.250 CC lib/util/dif.o 00:02:22.250 CC lib/util/fd.o 00:02:22.250 CC lib/util/fd_group.o 00:02:22.250 CC lib/util/file.o 00:02:22.250 CC lib/util/hexlify.o 00:02:22.250 CC lib/util/iov.o 00:02:22.250 CC lib/util/math.o 00:02:22.250 CC lib/util/net.o 00:02:22.250 CC lib/util/pipe.o 00:02:22.250 CC lib/util/strerror_tls.o 00:02:22.250 CC lib/util/string.o 00:02:22.250 CC lib/util/uuid.o 00:02:22.250 CC lib/util/xor.o 00:02:22.250 CC lib/util/zipf.o 00:02:22.250 CC lib/util/md5.o 00:02:22.250 CC lib/vfio_user/host/vfio_user_pci.o 00:02:22.250 CC lib/vfio_user/host/vfio_user.o 00:02:22.250 LIB libspdk_dma.a 00:02:22.250 SO libspdk_dma.so.5.0 00:02:22.250 SYMLINK libspdk_dma.so 00:02:22.250 LIB libspdk_ioat.a 00:02:22.250 SO libspdk_ioat.so.7.0 00:02:22.250 SYMLINK libspdk_ioat.so 00:02:22.250 LIB libspdk_vfio_user.a 00:02:22.250 SO libspdk_vfio_user.so.5.0 00:02:22.250 SYMLINK libspdk_vfio_user.so 00:02:22.250 LIB libspdk_util.a 00:02:22.250 SO libspdk_util.so.10.1 00:02:22.250 SYMLINK libspdk_util.so 00:02:22.250 CC lib/rdma_utils/rdma_utils.o 00:02:22.250 CC lib/env_dpdk/env.o 00:02:22.250 CC lib/idxd/idxd.o 00:02:22.250 CC lib/json/json_parse.o 00:02:22.250 CC lib/conf/conf.o 00:02:22.250 CC lib/json/json_util.o 00:02:22.250 CC lib/idxd/idxd_user.o 00:02:22.250 CC lib/vmd/vmd.o 00:02:22.250 CC lib/vmd/led.o 00:02:22.250 CC lib/idxd/idxd_kernel.o 00:02:22.250 CC lib/json/json_write.o 00:02:22.250 CC lib/env_dpdk/memory.o 00:02:22.250 CC lib/env_dpdk/pci.o 00:02:22.250 CC lib/env_dpdk/init.o 00:02:22.250 CC lib/env_dpdk/threads.o 00:02:22.250 CC lib/env_dpdk/pci_ioat.o 00:02:22.250 CC lib/env_dpdk/pci_virtio.o 00:02:22.250 CC lib/env_dpdk/pci_vmd.o 00:02:22.250 CC lib/env_dpdk/pci_idxd.o 00:02:22.250 CC lib/env_dpdk/pci_event.o 00:02:22.250 CC lib/env_dpdk/sigbus_handler.o 00:02:22.250 CC lib/env_dpdk/pci_dpdk.o 00:02:22.250 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:22.250 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:22.250 LIB libspdk_trace_parser.a 00:02:22.250 SO libspdk_trace_parser.so.6.0 00:02:22.250 SYMLINK libspdk_trace_parser.so 00:02:22.250 LIB libspdk_conf.a 00:02:22.250 SO libspdk_conf.so.6.0 00:02:22.250 LIB libspdk_rdma_utils.a 00:02:22.250 SYMLINK libspdk_conf.so 00:02:22.250 LIB libspdk_json.a 00:02:22.250 SO libspdk_rdma_utils.so.1.0 00:02:22.250 SO libspdk_json.so.6.0 00:02:22.250 SYMLINK libspdk_rdma_utils.so 00:02:22.250 SYMLINK libspdk_json.so 00:02:22.250 CC lib/rdma_provider/common.o 00:02:22.250 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:22.250 CC lib/jsonrpc/jsonrpc_server.o 00:02:22.250 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:22.250 CC lib/jsonrpc/jsonrpc_client.o 00:02:22.250 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:22.250 LIB libspdk_idxd.a 00:02:22.250 SO libspdk_idxd.so.12.1 00:02:22.250 LIB libspdk_vmd.a 00:02:22.250 SO libspdk_vmd.so.6.0 00:02:22.250 SYMLINK libspdk_idxd.so 00:02:22.250 
SYMLINK libspdk_vmd.so 00:02:22.250 LIB libspdk_rdma_provider.a 00:02:22.250 SO libspdk_rdma_provider.so.7.0 00:02:22.250 LIB libspdk_jsonrpc.a 00:02:22.250 SYMLINK libspdk_rdma_provider.so 00:02:22.250 SO libspdk_jsonrpc.so.6.0 00:02:22.250 SYMLINK libspdk_jsonrpc.so 00:02:22.250 CC lib/rpc/rpc.o 00:02:22.508 LIB libspdk_rpc.a 00:02:22.508 SO libspdk_rpc.so.6.0 00:02:22.508 SYMLINK libspdk_rpc.so 00:02:22.766 CC lib/trace/trace.o 00:02:22.766 CC lib/trace/trace_flags.o 00:02:22.766 CC lib/notify/notify.o 00:02:22.766 CC lib/keyring/keyring.o 00:02:22.766 CC lib/notify/notify_rpc.o 00:02:22.766 CC lib/trace/trace_rpc.o 00:02:22.766 CC lib/keyring/keyring_rpc.o 00:02:23.024 LIB libspdk_notify.a 00:02:23.024 SO libspdk_notify.so.6.0 00:02:23.024 LIB libspdk_keyring.a 00:02:23.024 SYMLINK libspdk_notify.so 00:02:23.024 LIB libspdk_trace.a 00:02:23.024 SO libspdk_keyring.so.2.0 00:02:23.024 SO libspdk_trace.so.11.0 00:02:23.024 SYMLINK libspdk_keyring.so 00:02:23.024 SYMLINK libspdk_trace.so 00:02:23.283 LIB libspdk_env_dpdk.a 00:02:23.283 CC lib/thread/thread.o 00:02:23.283 CC lib/thread/iobuf.o 00:02:23.283 CC lib/sock/sock.o 00:02:23.283 CC lib/sock/sock_rpc.o 00:02:23.283 SO libspdk_env_dpdk.so.15.1 00:02:23.540 SYMLINK libspdk_env_dpdk.so 00:02:23.798 LIB libspdk_sock.a 00:02:23.798 SO libspdk_sock.so.10.0 00:02:23.798 SYMLINK libspdk_sock.so 00:02:24.056 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:24.056 CC lib/nvme/nvme_ctrlr.o 00:02:24.056 CC lib/nvme/nvme_fabric.o 00:02:24.056 CC lib/nvme/nvme_ns_cmd.o 00:02:24.057 CC lib/nvme/nvme_ns.o 00:02:24.057 CC lib/nvme/nvme_pcie_common.o 00:02:24.057 CC lib/nvme/nvme_pcie.o 00:02:24.057 CC lib/nvme/nvme_qpair.o 00:02:24.057 CC lib/nvme/nvme.o 00:02:24.057 CC lib/nvme/nvme_quirks.o 00:02:24.057 CC lib/nvme/nvme_transport.o 00:02:24.057 CC lib/nvme/nvme_discovery.o 00:02:24.057 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:24.057 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:24.057 CC lib/nvme/nvme_tcp.o 00:02:24.057 CC lib/nvme/nvme_opal.o 00:02:24.057 CC lib/nvme/nvme_io_msg.o 00:02:24.057 CC lib/nvme/nvme_poll_group.o 00:02:24.057 CC lib/nvme/nvme_zns.o 00:02:24.057 CC lib/nvme/nvme_stubs.o 00:02:24.057 CC lib/nvme/nvme_auth.o 00:02:24.057 CC lib/nvme/nvme_cuse.o 00:02:24.057 CC lib/nvme/nvme_vfio_user.o 00:02:24.057 CC lib/nvme/nvme_rdma.o 00:02:24.991 LIB libspdk_thread.a 00:02:24.991 SO libspdk_thread.so.11.0 00:02:24.991 SYMLINK libspdk_thread.so 00:02:25.249 CC lib/blob/blobstore.o 00:02:25.249 CC lib/vfu_tgt/tgt_endpoint.o 00:02:25.249 CC lib/virtio/virtio.o 00:02:25.249 CC lib/init/json_config.o 00:02:25.249 CC lib/fsdev/fsdev.o 00:02:25.249 CC lib/accel/accel.o 00:02:25.249 CC lib/blob/request.o 00:02:25.249 CC lib/virtio/virtio_vhost_user.o 00:02:25.249 CC lib/accel/accel_rpc.o 00:02:25.249 CC lib/fsdev/fsdev_io.o 00:02:25.249 CC lib/init/subsystem.o 00:02:25.249 CC lib/vfu_tgt/tgt_rpc.o 00:02:25.249 CC lib/virtio/virtio_vfio_user.o 00:02:25.249 CC lib/accel/accel_sw.o 00:02:25.249 CC lib/fsdev/fsdev_rpc.o 00:02:25.249 CC lib/blob/zeroes.o 00:02:25.249 CC lib/init/subsystem_rpc.o 00:02:25.249 CC lib/blob/blob_bs_dev.o 00:02:25.249 CC lib/virtio/virtio_pci.o 00:02:25.249 CC lib/init/rpc.o 00:02:25.508 LIB libspdk_init.a 00:02:25.508 SO libspdk_init.so.6.0 00:02:25.508 SYMLINK libspdk_init.so 00:02:25.508 LIB libspdk_virtio.a 00:02:25.508 LIB libspdk_vfu_tgt.a 00:02:25.508 SO libspdk_vfu_tgt.so.3.0 00:02:25.508 SO libspdk_virtio.so.7.0 00:02:25.508 SYMLINK libspdk_vfu_tgt.so 00:02:25.508 SYMLINK libspdk_virtio.so 00:02:25.766 CC lib/event/app.o 
00:02:25.766 CC lib/event/reactor.o 00:02:25.766 CC lib/event/log_rpc.o 00:02:25.766 CC lib/event/app_rpc.o 00:02:25.766 CC lib/event/scheduler_static.o 00:02:25.766 LIB libspdk_fsdev.a 00:02:25.766 SO libspdk_fsdev.so.2.0 00:02:26.024 SYMLINK libspdk_fsdev.so 00:02:26.024 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:26.024 LIB libspdk_event.a 00:02:26.024 SO libspdk_event.so.14.0 00:02:26.282 SYMLINK libspdk_event.so 00:02:26.282 LIB libspdk_accel.a 00:02:26.282 SO libspdk_accel.so.16.0 00:02:26.282 SYMLINK libspdk_accel.so 00:02:26.540 LIB libspdk_nvme.a 00:02:26.540 SO libspdk_nvme.so.15.0 00:02:26.540 CC lib/bdev/bdev.o 00:02:26.540 CC lib/bdev/bdev_rpc.o 00:02:26.540 CC lib/bdev/bdev_zone.o 00:02:26.540 CC lib/bdev/part.o 00:02:26.540 CC lib/bdev/scsi_nvme.o 00:02:26.796 LIB libspdk_fuse_dispatcher.a 00:02:26.796 SO libspdk_fuse_dispatcher.so.1.0 00:02:26.796 SYMLINK libspdk_nvme.so 00:02:26.796 SYMLINK libspdk_fuse_dispatcher.so 00:02:28.187 LIB libspdk_blob.a 00:02:28.444 SO libspdk_blob.so.11.0 00:02:28.444 SYMLINK libspdk_blob.so 00:02:28.444 CC lib/blobfs/blobfs.o 00:02:28.444 CC lib/blobfs/tree.o 00:02:28.444 CC lib/lvol/lvol.o 00:02:29.376 LIB libspdk_bdev.a 00:02:29.376 SO libspdk_bdev.so.17.0 00:02:29.376 LIB libspdk_blobfs.a 00:02:29.377 SYMLINK libspdk_bdev.so 00:02:29.377 SO libspdk_blobfs.so.10.0 00:02:29.377 LIB libspdk_lvol.a 00:02:29.377 SYMLINK libspdk_blobfs.so 00:02:29.377 SO libspdk_lvol.so.10.0 00:02:29.642 SYMLINK libspdk_lvol.so 00:02:29.642 CC lib/ublk/ublk.o 00:02:29.642 CC lib/nbd/nbd.o 00:02:29.642 CC lib/ublk/ublk_rpc.o 00:02:29.642 CC lib/nvmf/ctrlr.o 00:02:29.642 CC lib/scsi/dev.o 00:02:29.642 CC lib/nbd/nbd_rpc.o 00:02:29.642 CC lib/nvmf/ctrlr_discovery.o 00:02:29.642 CC lib/scsi/lun.o 00:02:29.642 CC lib/ftl/ftl_core.o 00:02:29.642 CC lib/nvmf/ctrlr_bdev.o 00:02:29.642 CC lib/scsi/port.o 00:02:29.642 CC lib/ftl/ftl_init.o 00:02:29.642 CC lib/nvmf/subsystem.o 00:02:29.642 CC lib/scsi/scsi.o 00:02:29.642 CC lib/ftl/ftl_layout.o 00:02:29.642 CC lib/nvmf/nvmf.o 00:02:29.642 CC lib/scsi/scsi_bdev.o 00:02:29.642 CC lib/ftl/ftl_debug.o 00:02:29.642 CC lib/nvmf/nvmf_rpc.o 00:02:29.642 CC lib/scsi/scsi_pr.o 00:02:29.642 CC lib/ftl/ftl_io.o 00:02:29.642 CC lib/nvmf/transport.o 00:02:29.642 CC lib/ftl/ftl_sb.o 00:02:29.642 CC lib/scsi/scsi_rpc.o 00:02:29.642 CC lib/ftl/ftl_l2p.o 00:02:29.642 CC lib/nvmf/tcp.o 00:02:29.642 CC lib/nvmf/stubs.o 00:02:29.642 CC lib/scsi/task.o 00:02:29.642 CC lib/ftl/ftl_l2p_flat.o 00:02:29.642 CC lib/nvmf/mdns_server.o 00:02:29.642 CC lib/ftl/ftl_nv_cache.o 00:02:29.642 CC lib/nvmf/vfio_user.o 00:02:29.642 CC lib/ftl/ftl_band.o 00:02:29.642 CC lib/nvmf/rdma.o 00:02:29.642 CC lib/ftl/ftl_band_ops.o 00:02:29.642 CC lib/nvmf/auth.o 00:02:29.642 CC lib/ftl/ftl_writer.o 00:02:29.642 CC lib/ftl/ftl_rq.o 00:02:29.642 CC lib/ftl/ftl_reloc.o 00:02:29.642 CC lib/ftl/ftl_l2p_cache.o 00:02:29.642 CC lib/ftl/ftl_p2l.o 00:02:29.642 CC lib/ftl/ftl_p2l_log.o 00:02:29.642 CC lib/ftl/mngt/ftl_mngt.o 00:02:29.642 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:29.642 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:29.642 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:29.642 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:29.642 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:29.902 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:29.902 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:29.902 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:29.902 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:29.902 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:29.902 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:29.902 CC lib/ftl/mngt/ftl_mngt_upgrade.o 
00:02:29.902 CC lib/ftl/utils/ftl_conf.o 00:02:30.164 CC lib/ftl/utils/ftl_md.o 00:02:30.164 CC lib/ftl/utils/ftl_mempool.o 00:02:30.164 CC lib/ftl/utils/ftl_bitmap.o 00:02:30.164 CC lib/ftl/utils/ftl_property.o 00:02:30.164 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:30.164 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:30.164 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:30.164 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:30.164 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:30.164 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:30.164 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:30.164 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:30.164 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:30.164 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:30.164 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:30.425 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:30.425 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:30.425 CC lib/ftl/base/ftl_base_dev.o 00:02:30.425 CC lib/ftl/base/ftl_base_bdev.o 00:02:30.425 CC lib/ftl/ftl_trace.o 00:02:30.425 LIB libspdk_nbd.a 00:02:30.425 SO libspdk_nbd.so.7.0 00:02:30.425 LIB libspdk_scsi.a 00:02:30.425 SYMLINK libspdk_nbd.so 00:02:30.684 SO libspdk_scsi.so.9.0 00:02:30.684 SYMLINK libspdk_scsi.so 00:02:30.684 LIB libspdk_ublk.a 00:02:30.684 SO libspdk_ublk.so.3.0 00:02:30.942 SYMLINK libspdk_ublk.so 00:02:30.942 CC lib/vhost/vhost.o 00:02:30.942 CC lib/iscsi/conn.o 00:02:30.942 CC lib/vhost/vhost_rpc.o 00:02:30.942 CC lib/iscsi/init_grp.o 00:02:30.942 CC lib/iscsi/iscsi.o 00:02:30.942 CC lib/vhost/vhost_scsi.o 00:02:30.942 CC lib/iscsi/param.o 00:02:30.942 CC lib/vhost/vhost_blk.o 00:02:30.942 CC lib/iscsi/portal_grp.o 00:02:30.942 CC lib/vhost/rte_vhost_user.o 00:02:30.942 CC lib/iscsi/tgt_node.o 00:02:30.942 CC lib/iscsi/iscsi_subsystem.o 00:02:30.942 CC lib/iscsi/iscsi_rpc.o 00:02:30.942 CC lib/iscsi/task.o 00:02:31.200 LIB libspdk_ftl.a 00:02:31.200 SO libspdk_ftl.so.9.0 00:02:31.458 SYMLINK libspdk_ftl.so 00:02:32.025 LIB libspdk_vhost.a 00:02:32.025 SO libspdk_vhost.so.8.0 00:02:32.283 SYMLINK libspdk_vhost.so 00:02:32.283 LIB libspdk_nvmf.a 00:02:32.283 LIB libspdk_iscsi.a 00:02:32.283 SO libspdk_iscsi.so.8.0 00:02:32.283 SO libspdk_nvmf.so.20.0 00:02:32.542 SYMLINK libspdk_iscsi.so 00:02:32.542 SYMLINK libspdk_nvmf.so 00:02:32.813 CC module/vfu_device/vfu_virtio.o 00:02:32.813 CC module/vfu_device/vfu_virtio_blk.o 00:02:32.813 CC module/vfu_device/vfu_virtio_scsi.o 00:02:32.813 CC module/env_dpdk/env_dpdk_rpc.o 00:02:32.813 CC module/vfu_device/vfu_virtio_rpc.o 00:02:32.813 CC module/vfu_device/vfu_virtio_fs.o 00:02:32.813 CC module/accel/error/accel_error.o 00:02:32.813 CC module/accel/error/accel_error_rpc.o 00:02:32.813 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:32.813 CC module/accel/iaa/accel_iaa.o 00:02:32.813 CC module/accel/ioat/accel_ioat.o 00:02:32.813 CC module/keyring/file/keyring.o 00:02:32.813 CC module/accel/ioat/accel_ioat_rpc.o 00:02:32.813 CC module/accel/iaa/accel_iaa_rpc.o 00:02:32.813 CC module/scheduler/gscheduler/gscheduler.o 00:02:32.813 CC module/keyring/file/keyring_rpc.o 00:02:32.813 CC module/fsdev/aio/fsdev_aio.o 00:02:32.813 CC module/sock/posix/posix.o 00:02:32.813 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:32.813 CC module/keyring/linux/keyring.o 00:02:32.813 CC module/fsdev/aio/linux_aio_mgr.o 00:02:32.813 CC module/accel/dsa/accel_dsa.o 00:02:32.813 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:32.813 CC module/blob/bdev/blob_bdev.o 00:02:32.813 CC module/accel/dsa/accel_dsa_rpc.o 00:02:32.814 CC module/keyring/linux/keyring_rpc.o 00:02:33.115 LIB 
libspdk_env_dpdk_rpc.a 00:02:33.115 SO libspdk_env_dpdk_rpc.so.6.0 00:02:33.115 SYMLINK libspdk_env_dpdk_rpc.so 00:02:33.115 LIB libspdk_keyring_file.a 00:02:33.115 LIB libspdk_scheduler_gscheduler.a 00:02:33.115 LIB libspdk_scheduler_dpdk_governor.a 00:02:33.115 SO libspdk_scheduler_gscheduler.so.4.0 00:02:33.115 SO libspdk_keyring_file.so.2.0 00:02:33.115 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:33.115 LIB libspdk_accel_ioat.a 00:02:33.115 LIB libspdk_keyring_linux.a 00:02:33.115 LIB libspdk_scheduler_dynamic.a 00:02:33.115 LIB libspdk_accel_error.a 00:02:33.115 SYMLINK libspdk_scheduler_gscheduler.so 00:02:33.115 SYMLINK libspdk_keyring_file.so 00:02:33.115 SO libspdk_accel_ioat.so.6.0 00:02:33.115 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:33.115 SO libspdk_keyring_linux.so.1.0 00:02:33.115 SO libspdk_scheduler_dynamic.so.4.0 00:02:33.115 SO libspdk_accel_error.so.2.0 00:02:33.115 SYMLINK libspdk_accel_ioat.so 00:02:33.115 LIB libspdk_blob_bdev.a 00:02:33.115 SYMLINK libspdk_keyring_linux.so 00:02:33.115 SYMLINK libspdk_scheduler_dynamic.so 00:02:33.373 LIB libspdk_accel_iaa.a 00:02:33.373 SYMLINK libspdk_accel_error.so 00:02:33.373 LIB libspdk_accel_dsa.a 00:02:33.373 SO libspdk_blob_bdev.so.11.0 00:02:33.373 SO libspdk_accel_iaa.so.3.0 00:02:33.373 SO libspdk_accel_dsa.so.5.0 00:02:33.373 SYMLINK libspdk_blob_bdev.so 00:02:33.373 SYMLINK libspdk_accel_iaa.so 00:02:33.373 SYMLINK libspdk_accel_dsa.so 00:02:33.633 LIB libspdk_vfu_device.a 00:02:33.633 SO libspdk_vfu_device.so.3.0 00:02:33.633 CC module/bdev/raid/bdev_raid.o 00:02:33.633 CC module/blobfs/bdev/blobfs_bdev.o 00:02:33.633 CC module/bdev/delay/vbdev_delay.o 00:02:33.633 CC module/bdev/raid/bdev_raid_rpc.o 00:02:33.633 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:33.633 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:33.633 CC module/bdev/lvol/vbdev_lvol.o 00:02:33.633 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:33.633 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:33.633 CC module/bdev/raid/bdev_raid_sb.o 00:02:33.633 CC module/bdev/error/vbdev_error.o 00:02:33.633 CC module/bdev/raid/raid0.o 00:02:33.633 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:33.633 CC module/bdev/nvme/bdev_nvme.o 00:02:33.633 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:33.633 CC module/bdev/gpt/gpt.o 00:02:33.633 CC module/bdev/split/vbdev_split.o 00:02:33.633 CC module/bdev/error/vbdev_error_rpc.o 00:02:33.633 CC module/bdev/gpt/vbdev_gpt.o 00:02:33.633 CC module/bdev/ftl/bdev_ftl.o 00:02:33.633 CC module/bdev/passthru/vbdev_passthru.o 00:02:33.633 CC module/bdev/malloc/bdev_malloc.o 00:02:33.633 CC module/bdev/raid/raid1.o 00:02:33.633 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:33.633 CC module/bdev/split/vbdev_split_rpc.o 00:02:33.633 CC module/bdev/nvme/nvme_rpc.o 00:02:33.633 CC module/bdev/raid/concat.o 00:02:33.633 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:33.633 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:33.633 CC module/bdev/null/bdev_null.o 00:02:33.633 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:33.633 CC module/bdev/nvme/bdev_mdns_client.o 00:02:33.633 CC module/bdev/null/bdev_null_rpc.o 00:02:33.634 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:33.634 CC module/bdev/nvme/vbdev_opal.o 00:02:33.634 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:33.634 CC module/bdev/iscsi/bdev_iscsi.o 00:02:33.634 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:33.634 CC module/bdev/aio/bdev_aio.o 00:02:33.634 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:33.634 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:33.634 CC 
module/bdev/aio/bdev_aio_rpc.o 00:02:33.634 SYMLINK libspdk_vfu_device.so 00:02:33.892 LIB libspdk_fsdev_aio.a 00:02:33.892 SO libspdk_fsdev_aio.so.1.0 00:02:33.892 LIB libspdk_sock_posix.a 00:02:33.892 SO libspdk_sock_posix.so.6.0 00:02:33.892 SYMLINK libspdk_fsdev_aio.so 00:02:33.892 LIB libspdk_blobfs_bdev.a 00:02:33.892 SYMLINK libspdk_sock_posix.so 00:02:33.892 SO libspdk_blobfs_bdev.so.6.0 00:02:34.150 LIB libspdk_bdev_split.a 00:02:34.151 LIB libspdk_bdev_null.a 00:02:34.151 SYMLINK libspdk_blobfs_bdev.so 00:02:34.151 SO libspdk_bdev_split.so.6.0 00:02:34.151 SO libspdk_bdev_null.so.6.0 00:02:34.151 LIB libspdk_bdev_error.a 00:02:34.151 SO libspdk_bdev_error.so.6.0 00:02:34.151 LIB libspdk_bdev_gpt.a 00:02:34.151 SYMLINK libspdk_bdev_split.so 00:02:34.151 LIB libspdk_bdev_malloc.a 00:02:34.151 SO libspdk_bdev_gpt.so.6.0 00:02:34.151 SYMLINK libspdk_bdev_null.so 00:02:34.151 SO libspdk_bdev_malloc.so.6.0 00:02:34.151 LIB libspdk_bdev_ftl.a 00:02:34.151 LIB libspdk_bdev_delay.a 00:02:34.151 SYMLINK libspdk_bdev_error.so 00:02:34.151 LIB libspdk_bdev_iscsi.a 00:02:34.151 SO libspdk_bdev_ftl.so.6.0 00:02:34.151 LIB libspdk_bdev_passthru.a 00:02:34.151 SO libspdk_bdev_delay.so.6.0 00:02:34.151 SYMLINK libspdk_bdev_gpt.so 00:02:34.151 SO libspdk_bdev_iscsi.so.6.0 00:02:34.151 LIB libspdk_bdev_zone_block.a 00:02:34.151 LIB libspdk_bdev_aio.a 00:02:34.151 SO libspdk_bdev_passthru.so.6.0 00:02:34.151 SYMLINK libspdk_bdev_malloc.so 00:02:34.151 SO libspdk_bdev_aio.so.6.0 00:02:34.151 SO libspdk_bdev_zone_block.so.6.0 00:02:34.151 SYMLINK libspdk_bdev_ftl.so 00:02:34.151 SYMLINK libspdk_bdev_delay.so 00:02:34.151 SYMLINK libspdk_bdev_iscsi.so 00:02:34.151 SYMLINK libspdk_bdev_passthru.so 00:02:34.151 SYMLINK libspdk_bdev_aio.so 00:02:34.151 SYMLINK libspdk_bdev_zone_block.so 00:02:34.409 LIB libspdk_bdev_virtio.a 00:02:34.409 LIB libspdk_bdev_lvol.a 00:02:34.409 SO libspdk_bdev_virtio.so.6.0 00:02:34.409 SO libspdk_bdev_lvol.so.6.0 00:02:34.409 SYMLINK libspdk_bdev_virtio.so 00:02:34.409 SYMLINK libspdk_bdev_lvol.so 00:02:34.976 LIB libspdk_bdev_raid.a 00:02:34.976 SO libspdk_bdev_raid.so.6.0 00:02:34.976 SYMLINK libspdk_bdev_raid.so 00:02:36.355 LIB libspdk_bdev_nvme.a 00:02:36.355 SO libspdk_bdev_nvme.so.7.1 00:02:36.355 SYMLINK libspdk_bdev_nvme.so 00:02:36.921 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:36.921 CC module/event/subsystems/sock/sock.o 00:02:36.921 CC module/event/subsystems/fsdev/fsdev.o 00:02:36.921 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:36.921 CC module/event/subsystems/iobuf/iobuf.o 00:02:36.921 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:36.921 CC module/event/subsystems/keyring/keyring.o 00:02:36.921 CC module/event/subsystems/scheduler/scheduler.o 00:02:36.921 CC module/event/subsystems/vmd/vmd.o 00:02:36.921 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:36.921 LIB libspdk_event_keyring.a 00:02:36.921 LIB libspdk_event_vhost_blk.a 00:02:36.921 LIB libspdk_event_fsdev.a 00:02:36.921 LIB libspdk_event_scheduler.a 00:02:36.921 LIB libspdk_event_vfu_tgt.a 00:02:36.921 LIB libspdk_event_vmd.a 00:02:36.921 LIB libspdk_event_sock.a 00:02:36.921 SO libspdk_event_keyring.so.1.0 00:02:36.921 SO libspdk_event_vhost_blk.so.3.0 00:02:36.921 LIB libspdk_event_iobuf.a 00:02:36.921 SO libspdk_event_scheduler.so.4.0 00:02:36.921 SO libspdk_event_fsdev.so.1.0 00:02:36.921 SO libspdk_event_vfu_tgt.so.3.0 00:02:36.921 SO libspdk_event_sock.so.5.0 00:02:36.921 SO libspdk_event_vmd.so.6.0 00:02:36.921 SO libspdk_event_iobuf.so.3.0 00:02:36.921 SYMLINK 
libspdk_event_keyring.so 00:02:36.921 SYMLINK libspdk_event_vhost_blk.so 00:02:36.921 SYMLINK libspdk_event_scheduler.so 00:02:36.921 SYMLINK libspdk_event_fsdev.so 00:02:36.921 SYMLINK libspdk_event_vfu_tgt.so 00:02:36.921 SYMLINK libspdk_event_sock.so 00:02:36.921 SYMLINK libspdk_event_vmd.so 00:02:36.921 SYMLINK libspdk_event_iobuf.so 00:02:37.180 CC module/event/subsystems/accel/accel.o 00:02:37.439 LIB libspdk_event_accel.a 00:02:37.439 SO libspdk_event_accel.so.6.0 00:02:37.439 SYMLINK libspdk_event_accel.so 00:02:37.696 CC module/event/subsystems/bdev/bdev.o 00:02:37.696 LIB libspdk_event_bdev.a 00:02:37.696 SO libspdk_event_bdev.so.6.0 00:02:37.954 SYMLINK libspdk_event_bdev.so 00:02:37.954 CC module/event/subsystems/nbd/nbd.o 00:02:37.954 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:37.954 CC module/event/subsystems/ublk/ublk.o 00:02:37.954 CC module/event/subsystems/scsi/scsi.o 00:02:37.954 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:38.211 LIB libspdk_event_ublk.a 00:02:38.211 LIB libspdk_event_nbd.a 00:02:38.211 LIB libspdk_event_scsi.a 00:02:38.211 SO libspdk_event_ublk.so.3.0 00:02:38.211 SO libspdk_event_nbd.so.6.0 00:02:38.211 SO libspdk_event_scsi.so.6.0 00:02:38.211 SYMLINK libspdk_event_ublk.so 00:02:38.211 SYMLINK libspdk_event_nbd.so 00:02:38.211 SYMLINK libspdk_event_scsi.so 00:02:38.211 LIB libspdk_event_nvmf.a 00:02:38.211 SO libspdk_event_nvmf.so.6.0 00:02:38.469 SYMLINK libspdk_event_nvmf.so 00:02:38.469 CC module/event/subsystems/iscsi/iscsi.o 00:02:38.469 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:38.469 LIB libspdk_event_vhost_scsi.a 00:02:38.469 LIB libspdk_event_iscsi.a 00:02:38.469 SO libspdk_event_vhost_scsi.so.3.0 00:02:38.727 SO libspdk_event_iscsi.so.6.0 00:02:38.727 SYMLINK libspdk_event_vhost_scsi.so 00:02:38.727 SYMLINK libspdk_event_iscsi.so 00:02:38.727 SO libspdk.so.6.0 00:02:38.727 SYMLINK libspdk.so 00:02:38.990 CXX app/trace/trace.o 00:02:38.990 CC app/trace_record/trace_record.o 00:02:38.990 CC app/spdk_nvme_identify/identify.o 00:02:38.990 CC app/spdk_top/spdk_top.o 00:02:38.990 CC app/spdk_lspci/spdk_lspci.o 00:02:38.990 CC app/spdk_nvme_discover/discovery_aer.o 00:02:38.990 CC app/spdk_nvme_perf/perf.o 00:02:38.990 TEST_HEADER include/spdk/accel.h 00:02:38.990 TEST_HEADER include/spdk/accel_module.h 00:02:38.990 CC test/rpc_client/rpc_client_test.o 00:02:38.990 TEST_HEADER include/spdk/assert.h 00:02:38.990 TEST_HEADER include/spdk/barrier.h 00:02:38.990 TEST_HEADER include/spdk/base64.h 00:02:38.990 TEST_HEADER include/spdk/bdev.h 00:02:38.990 TEST_HEADER include/spdk/bdev_module.h 00:02:38.990 TEST_HEADER include/spdk/bdev_zone.h 00:02:38.990 TEST_HEADER include/spdk/bit_pool.h 00:02:38.990 TEST_HEADER include/spdk/bit_array.h 00:02:38.990 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:38.990 TEST_HEADER include/spdk/blob_bdev.h 00:02:38.990 TEST_HEADER include/spdk/blobfs.h 00:02:38.990 TEST_HEADER include/spdk/blob.h 00:02:38.990 TEST_HEADER include/spdk/conf.h 00:02:38.990 TEST_HEADER include/spdk/config.h 00:02:38.990 TEST_HEADER include/spdk/cpuset.h 00:02:38.990 TEST_HEADER include/spdk/crc16.h 00:02:38.990 TEST_HEADER include/spdk/crc64.h 00:02:38.990 TEST_HEADER include/spdk/crc32.h 00:02:38.990 TEST_HEADER include/spdk/dif.h 00:02:38.990 TEST_HEADER include/spdk/dma.h 00:02:38.990 TEST_HEADER include/spdk/endian.h 00:02:38.990 TEST_HEADER include/spdk/env_dpdk.h 00:02:38.990 TEST_HEADER include/spdk/env.h 00:02:38.990 TEST_HEADER include/spdk/event.h 00:02:38.990 TEST_HEADER include/spdk/fd_group.h 
00:02:38.990 TEST_HEADER include/spdk/fd.h 00:02:38.990 TEST_HEADER include/spdk/file.h 00:02:38.990 TEST_HEADER include/spdk/fsdev.h 00:02:38.990 TEST_HEADER include/spdk/fsdev_module.h 00:02:38.990 TEST_HEADER include/spdk/ftl.h 00:02:38.990 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:38.990 TEST_HEADER include/spdk/gpt_spec.h 00:02:38.990 TEST_HEADER include/spdk/hexlify.h 00:02:38.990 TEST_HEADER include/spdk/histogram_data.h 00:02:38.990 TEST_HEADER include/spdk/idxd.h 00:02:38.990 TEST_HEADER include/spdk/init.h 00:02:38.990 TEST_HEADER include/spdk/idxd_spec.h 00:02:38.990 TEST_HEADER include/spdk/ioat.h 00:02:38.990 TEST_HEADER include/spdk/ioat_spec.h 00:02:38.990 TEST_HEADER include/spdk/json.h 00:02:38.990 TEST_HEADER include/spdk/iscsi_spec.h 00:02:38.990 TEST_HEADER include/spdk/jsonrpc.h 00:02:38.990 TEST_HEADER include/spdk/keyring.h 00:02:38.990 TEST_HEADER include/spdk/keyring_module.h 00:02:38.990 TEST_HEADER include/spdk/likely.h 00:02:38.990 TEST_HEADER include/spdk/log.h 00:02:38.990 TEST_HEADER include/spdk/lvol.h 00:02:38.990 TEST_HEADER include/spdk/md5.h 00:02:38.990 TEST_HEADER include/spdk/memory.h 00:02:38.990 TEST_HEADER include/spdk/mmio.h 00:02:38.990 TEST_HEADER include/spdk/nbd.h 00:02:38.990 TEST_HEADER include/spdk/notify.h 00:02:38.990 TEST_HEADER include/spdk/net.h 00:02:38.990 TEST_HEADER include/spdk/nvme_intel.h 00:02:38.990 TEST_HEADER include/spdk/nvme.h 00:02:38.990 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:38.990 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:38.990 TEST_HEADER include/spdk/nvme_spec.h 00:02:38.990 TEST_HEADER include/spdk/nvme_zns.h 00:02:38.990 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:38.990 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:38.990 TEST_HEADER include/spdk/nvmf.h 00:02:38.990 TEST_HEADER include/spdk/nvmf_spec.h 00:02:38.990 TEST_HEADER include/spdk/nvmf_transport.h 00:02:38.990 CC app/spdk_dd/spdk_dd.o 00:02:38.990 TEST_HEADER include/spdk/opal_spec.h 00:02:38.990 TEST_HEADER include/spdk/opal.h 00:02:38.990 TEST_HEADER include/spdk/pci_ids.h 00:02:38.990 TEST_HEADER include/spdk/pipe.h 00:02:38.990 TEST_HEADER include/spdk/queue.h 00:02:38.990 TEST_HEADER include/spdk/reduce.h 00:02:38.990 TEST_HEADER include/spdk/rpc.h 00:02:38.990 TEST_HEADER include/spdk/scheduler.h 00:02:38.990 TEST_HEADER include/spdk/scsi.h 00:02:38.990 TEST_HEADER include/spdk/scsi_spec.h 00:02:38.990 TEST_HEADER include/spdk/stdinc.h 00:02:38.990 TEST_HEADER include/spdk/sock.h 00:02:38.990 TEST_HEADER include/spdk/string.h 00:02:38.990 TEST_HEADER include/spdk/thread.h 00:02:38.990 TEST_HEADER include/spdk/trace.h 00:02:38.990 TEST_HEADER include/spdk/trace_parser.h 00:02:38.990 TEST_HEADER include/spdk/tree.h 00:02:38.990 TEST_HEADER include/spdk/ublk.h 00:02:38.990 TEST_HEADER include/spdk/util.h 00:02:38.990 TEST_HEADER include/spdk/version.h 00:02:38.990 TEST_HEADER include/spdk/uuid.h 00:02:38.990 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:38.990 TEST_HEADER include/spdk/vhost.h 00:02:38.990 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:38.990 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:38.990 TEST_HEADER include/spdk/vmd.h 00:02:38.990 TEST_HEADER include/spdk/xor.h 00:02:38.990 TEST_HEADER include/spdk/zipf.h 00:02:38.990 CXX test/cpp_headers/accel.o 00:02:38.990 CXX test/cpp_headers/assert.o 00:02:38.990 CXX test/cpp_headers/accel_module.o 00:02:38.990 CXX test/cpp_headers/barrier.o 00:02:38.990 CXX test/cpp_headers/base64.o 00:02:38.990 CXX test/cpp_headers/bdev.o 00:02:38.990 CXX 
test/cpp_headers/bdev_module.o 00:02:38.990 CXX test/cpp_headers/bdev_zone.o 00:02:38.990 CXX test/cpp_headers/bit_array.o 00:02:38.990 CXX test/cpp_headers/bit_pool.o 00:02:38.990 CXX test/cpp_headers/blob_bdev.o 00:02:38.990 CXX test/cpp_headers/blobfs_bdev.o 00:02:38.990 CXX test/cpp_headers/blobfs.o 00:02:38.990 CXX test/cpp_headers/blob.o 00:02:38.990 CXX test/cpp_headers/conf.o 00:02:38.990 CXX test/cpp_headers/config.o 00:02:38.990 CXX test/cpp_headers/cpuset.o 00:02:38.990 CXX test/cpp_headers/crc16.o 00:02:38.990 CC app/nvmf_tgt/nvmf_main.o 00:02:38.990 CC app/iscsi_tgt/iscsi_tgt.o 00:02:38.990 CC app/spdk_tgt/spdk_tgt.o 00:02:38.990 CC examples/ioat/perf/perf.o 00:02:38.990 CXX test/cpp_headers/crc32.o 00:02:38.990 CC examples/util/zipf/zipf.o 00:02:38.990 CC examples/ioat/verify/verify.o 00:02:38.990 CC app/fio/nvme/fio_plugin.o 00:02:38.990 CC test/thread/poller_perf/poller_perf.o 00:02:38.990 CC test/app/jsoncat/jsoncat.o 00:02:38.990 CC test/app/histogram_perf/histogram_perf.o 00:02:38.990 CC test/env/pci/pci_ut.o 00:02:38.990 CC test/env/vtophys/vtophys.o 00:02:38.990 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:38.990 CC test/env/memory/memory_ut.o 00:02:38.990 CC test/app/stub/stub.o 00:02:39.253 CC app/fio/bdev/fio_plugin.o 00:02:39.253 CC test/app/bdev_svc/bdev_svc.o 00:02:39.253 CC test/dma/test_dma/test_dma.o 00:02:39.253 LINK spdk_lspci 00:02:39.253 CC test/env/mem_callbacks/mem_callbacks.o 00:02:39.253 LINK rpc_client_test 00:02:39.253 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:39.514 LINK spdk_nvme_discover 00:02:39.514 LINK interrupt_tgt 00:02:39.514 LINK jsoncat 00:02:39.514 LINK zipf 00:02:39.514 LINK poller_perf 00:02:39.514 CXX test/cpp_headers/crc64.o 00:02:39.514 LINK histogram_perf 00:02:39.514 LINK vtophys 00:02:39.514 CXX test/cpp_headers/dif.o 00:02:39.514 CXX test/cpp_headers/dma.o 00:02:39.514 CXX test/cpp_headers/endian.o 00:02:39.514 LINK nvmf_tgt 00:02:39.514 LINK spdk_trace_record 00:02:39.514 CXX test/cpp_headers/env_dpdk.o 00:02:39.514 CXX test/cpp_headers/env.o 00:02:39.514 CXX test/cpp_headers/event.o 00:02:39.514 CXX test/cpp_headers/fd_group.o 00:02:39.514 CXX test/cpp_headers/fd.o 00:02:39.514 CXX test/cpp_headers/file.o 00:02:39.514 LINK env_dpdk_post_init 00:02:39.514 LINK stub 00:02:39.514 CXX test/cpp_headers/fsdev.o 00:02:39.514 CXX test/cpp_headers/fsdev_module.o 00:02:39.514 CXX test/cpp_headers/ftl.o 00:02:39.514 CXX test/cpp_headers/fuse_dispatcher.o 00:02:39.514 CXX test/cpp_headers/gpt_spec.o 00:02:39.514 LINK verify 00:02:39.514 CXX test/cpp_headers/hexlify.o 00:02:39.514 LINK ioat_perf 00:02:39.514 LINK bdev_svc 00:02:39.514 LINK iscsi_tgt 00:02:39.514 LINK spdk_tgt 00:02:39.514 CXX test/cpp_headers/histogram_data.o 00:02:39.775 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:39.775 CXX test/cpp_headers/idxd.o 00:02:39.775 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:39.775 CXX test/cpp_headers/idxd_spec.o 00:02:39.775 CXX test/cpp_headers/init.o 00:02:39.775 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:39.775 LINK spdk_dd 00:02:39.775 CXX test/cpp_headers/ioat.o 00:02:39.775 CXX test/cpp_headers/ioat_spec.o 00:02:39.775 LINK spdk_trace 00:02:39.775 CXX test/cpp_headers/iscsi_spec.o 00:02:39.775 CXX test/cpp_headers/json.o 00:02:39.775 CXX test/cpp_headers/jsonrpc.o 00:02:39.775 CXX test/cpp_headers/keyring.o 00:02:39.775 CXX test/cpp_headers/keyring_module.o 00:02:39.775 CXX test/cpp_headers/likely.o 00:02:39.775 CXX test/cpp_headers/log.o 00:02:40.037 CXX test/cpp_headers/lvol.o 00:02:40.037 CXX 
test/cpp_headers/md5.o 00:02:40.037 CXX test/cpp_headers/memory.o 00:02:40.037 CXX test/cpp_headers/mmio.o 00:02:40.037 CXX test/cpp_headers/nbd.o 00:02:40.037 CXX test/cpp_headers/net.o 00:02:40.037 CXX test/cpp_headers/notify.o 00:02:40.037 CXX test/cpp_headers/nvme.o 00:02:40.037 LINK pci_ut 00:02:40.037 CXX test/cpp_headers/nvme_intel.o 00:02:40.037 CXX test/cpp_headers/nvme_ocssd.o 00:02:40.037 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:40.037 CXX test/cpp_headers/nvme_spec.o 00:02:40.037 CXX test/cpp_headers/nvme_zns.o 00:02:40.037 CXX test/cpp_headers/nvmf_cmd.o 00:02:40.037 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:40.037 CC examples/sock/hello_world/hello_sock.o 00:02:40.037 CC examples/vmd/lsvmd/lsvmd.o 00:02:40.037 CXX test/cpp_headers/nvmf.o 00:02:40.037 CXX test/cpp_headers/nvmf_spec.o 00:02:40.037 CXX test/cpp_headers/nvmf_transport.o 00:02:40.037 CC examples/idxd/perf/perf.o 00:02:40.298 CC examples/thread/thread/thread_ex.o 00:02:40.298 CC examples/vmd/led/led.o 00:02:40.298 LINK spdk_bdev 00:02:40.298 CC test/event/event_perf/event_perf.o 00:02:40.298 LINK nvme_fuzz 00:02:40.298 LINK spdk_nvme 00:02:40.298 CXX test/cpp_headers/opal.o 00:02:40.298 CXX test/cpp_headers/opal_spec.o 00:02:40.298 CC test/event/reactor_perf/reactor_perf.o 00:02:40.298 CC test/event/reactor/reactor.o 00:02:40.298 CXX test/cpp_headers/pci_ids.o 00:02:40.298 CC test/event/app_repeat/app_repeat.o 00:02:40.298 LINK test_dma 00:02:40.298 CXX test/cpp_headers/pipe.o 00:02:40.298 CXX test/cpp_headers/queue.o 00:02:40.298 CC test/event/scheduler/scheduler.o 00:02:40.298 CXX test/cpp_headers/reduce.o 00:02:40.298 CXX test/cpp_headers/rpc.o 00:02:40.298 CXX test/cpp_headers/scheduler.o 00:02:40.298 CXX test/cpp_headers/scsi.o 00:02:40.298 CXX test/cpp_headers/scsi_spec.o 00:02:40.298 CXX test/cpp_headers/sock.o 00:02:40.298 CXX test/cpp_headers/stdinc.o 00:02:40.298 CXX test/cpp_headers/string.o 00:02:40.298 CXX test/cpp_headers/thread.o 00:02:40.298 CXX test/cpp_headers/trace.o 00:02:40.299 CXX test/cpp_headers/trace_parser.o 00:02:40.299 CXX test/cpp_headers/tree.o 00:02:40.299 CXX test/cpp_headers/ublk.o 00:02:40.299 CXX test/cpp_headers/util.o 00:02:40.558 CC app/vhost/vhost.o 00:02:40.558 CXX test/cpp_headers/uuid.o 00:02:40.558 LINK lsvmd 00:02:40.558 CXX test/cpp_headers/version.o 00:02:40.558 CXX test/cpp_headers/vfio_user_pci.o 00:02:40.558 CXX test/cpp_headers/vfio_user_spec.o 00:02:40.558 CXX test/cpp_headers/vhost.o 00:02:40.558 CXX test/cpp_headers/vmd.o 00:02:40.558 CXX test/cpp_headers/xor.o 00:02:40.558 CXX test/cpp_headers/zipf.o 00:02:40.558 LINK spdk_nvme_perf 00:02:40.558 LINK led 00:02:40.558 LINK spdk_nvme_identify 00:02:40.558 LINK reactor_perf 00:02:40.558 LINK event_perf 00:02:40.558 LINK reactor 00:02:40.558 LINK vhost_fuzz 00:02:40.558 LINK mem_callbacks 00:02:40.558 LINK app_repeat 00:02:40.558 LINK hello_sock 00:02:40.558 LINK thread 00:02:40.817 LINK spdk_top 00:02:40.817 LINK idxd_perf 00:02:40.817 LINK vhost 00:02:40.817 LINK scheduler 00:02:40.817 CC test/nvme/reserve/reserve.o 00:02:40.817 CC test/nvme/fused_ordering/fused_ordering.o 00:02:40.817 CC test/nvme/e2edp/nvme_dp.o 00:02:40.817 CC test/nvme/connect_stress/connect_stress.o 00:02:40.817 CC test/nvme/startup/startup.o 00:02:40.817 CC test/nvme/boot_partition/boot_partition.o 00:02:40.817 CC test/nvme/err_injection/err_injection.o 00:02:40.818 CC test/nvme/reset/reset.o 00:02:40.818 CC test/nvme/overhead/overhead.o 00:02:40.818 CC test/nvme/cuse/cuse.o 00:02:40.818 CC test/nvme/sgl/sgl.o 00:02:40.818 CC 
test/nvme/doorbell_aers/doorbell_aers.o 00:02:40.818 CC test/nvme/compliance/nvme_compliance.o 00:02:40.818 CC test/nvme/fdp/fdp.o 00:02:40.818 CC test/nvme/simple_copy/simple_copy.o 00:02:40.818 CC test/nvme/aer/aer.o 00:02:41.077 CC test/blobfs/mkfs/mkfs.o 00:02:41.077 CC test/accel/dif/dif.o 00:02:41.077 CC test/lvol/esnap/esnap.o 00:02:41.077 CC examples/nvme/hello_world/hello_world.o 00:02:41.077 CC examples/nvme/hotplug/hotplug.o 00:02:41.077 CC examples/nvme/arbitration/arbitration.o 00:02:41.077 CC examples/nvme/abort/abort.o 00:02:41.077 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:41.077 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:41.077 CC examples/nvme/reconnect/reconnect.o 00:02:41.077 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:41.077 LINK boot_partition 00:02:41.077 CC examples/accel/perf/accel_perf.o 00:02:41.077 LINK connect_stress 00:02:41.077 LINK err_injection 00:02:41.077 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:41.336 LINK reserve 00:02:41.336 CC examples/blob/hello_world/hello_blob.o 00:02:41.336 LINK doorbell_aers 00:02:41.336 LINK fused_ordering 00:02:41.336 CC examples/blob/cli/blobcli.o 00:02:41.336 LINK startup 00:02:41.336 LINK simple_copy 00:02:41.336 LINK nvme_dp 00:02:41.336 LINK mkfs 00:02:41.336 LINK sgl 00:02:41.336 LINK overhead 00:02:41.336 LINK aer 00:02:41.336 LINK fdp 00:02:41.336 LINK nvme_compliance 00:02:41.336 LINK reset 00:02:41.336 LINK memory_ut 00:02:41.595 LINK pmr_persistence 00:02:41.595 LINK cmb_copy 00:02:41.595 LINK reconnect 00:02:41.595 LINK hello_blob 00:02:41.595 LINK hello_world 00:02:41.595 LINK hotplug 00:02:41.595 LINK abort 00:02:41.595 LINK hello_fsdev 00:02:41.595 LINK arbitration 00:02:41.595 LINK nvme_manage 00:02:41.852 LINK blobcli 00:02:41.852 LINK accel_perf 00:02:41.852 LINK dif 00:02:42.110 LINK iscsi_fuzz 00:02:42.368 CC examples/bdev/hello_world/hello_bdev.o 00:02:42.368 CC examples/bdev/bdevperf/bdevperf.o 00:02:42.368 CC test/bdev/bdevio/bdevio.o 00:02:42.626 LINK cuse 00:02:42.626 LINK hello_bdev 00:02:42.626 LINK bdevio 00:02:43.193 LINK bdevperf 00:02:43.451 CC examples/nvmf/nvmf/nvmf.o 00:02:43.709 LINK nvmf 00:02:46.993 LINK esnap 00:02:46.993 00:02:46.993 real 1m10.191s 00:02:46.993 user 11m55.642s 00:02:46.993 sys 2m36.792s 00:02:46.993 07:04:49 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:46.993 07:04:49 make -- common/autotest_common.sh@10 -- $ set +x 00:02:46.993 ************************************ 00:02:46.993 END TEST make 00:02:46.993 ************************************ 00:02:46.993 07:04:50 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:46.993 07:04:50 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:46.993 07:04:50 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:46.993 07:04:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.993 07:04:50 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:46.993 07:04:50 -- pm/common@44 -- $ pid=2301863 00:02:46.993 07:04:50 -- pm/common@50 -- $ kill -TERM 2301863 00:02:46.993 07:04:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.993 07:04:50 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:46.993 07:04:50 -- pm/common@44 -- $ pid=2301864 00:02:46.993 07:04:50 -- pm/common@50 -- $ kill -TERM 2301864 00:02:46.993 07:04:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.993 
07:04:50 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:46.993 07:04:50 -- pm/common@44 -- $ pid=2301867 00:02:46.993 07:04:50 -- pm/common@50 -- $ kill -TERM 2301867 00:02:46.994 07:04:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.994 07:04:50 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:46.994 07:04:50 -- pm/common@44 -- $ pid=2301896 00:02:46.994 07:04:50 -- pm/common@50 -- $ sudo -E kill -TERM 2301896 00:02:46.994 07:04:50 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:46.994 07:04:50 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:46.994 07:04:50 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:02:46.994 07:04:50 -- common/autotest_common.sh@1691 -- # lcov --version 00:02:46.994 07:04:50 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:02:46.994 07:04:50 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:02:46.994 07:04:50 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:46.994 07:04:50 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:46.994 07:04:50 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:46.994 07:04:50 -- scripts/common.sh@336 -- # IFS=.-: 00:02:46.994 07:04:50 -- scripts/common.sh@336 -- # read -ra ver1 00:02:46.994 07:04:50 -- scripts/common.sh@337 -- # IFS=.-: 00:02:46.994 07:04:50 -- scripts/common.sh@337 -- # read -ra ver2 00:02:46.994 07:04:50 -- scripts/common.sh@338 -- # local 'op=<' 00:02:46.994 07:04:50 -- scripts/common.sh@340 -- # ver1_l=2 00:02:46.994 07:04:50 -- scripts/common.sh@341 -- # ver2_l=1 00:02:46.994 07:04:50 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:46.994 07:04:50 -- scripts/common.sh@344 -- # case "$op" in 00:02:46.994 07:04:50 -- scripts/common.sh@345 -- # : 1 00:02:46.994 07:04:50 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:46.994 07:04:50 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:46.994 07:04:50 -- scripts/common.sh@365 -- # decimal 1 00:02:46.994 07:04:50 -- scripts/common.sh@353 -- # local d=1 00:02:46.994 07:04:50 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:46.994 07:04:50 -- scripts/common.sh@355 -- # echo 1 00:02:46.994 07:04:50 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:46.994 07:04:50 -- scripts/common.sh@366 -- # decimal 2 00:02:46.994 07:04:50 -- scripts/common.sh@353 -- # local d=2 00:02:46.994 07:04:50 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:46.994 07:04:50 -- scripts/common.sh@355 -- # echo 2 00:02:46.994 07:04:50 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:46.994 07:04:50 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:46.994 07:04:50 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:46.994 07:04:50 -- scripts/common.sh@368 -- # return 0 00:02:46.994 07:04:50 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:46.994 07:04:50 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:02:46.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:46.994 --rc genhtml_branch_coverage=1 00:02:46.994 --rc genhtml_function_coverage=1 00:02:46.994 --rc genhtml_legend=1 00:02:46.994 --rc geninfo_all_blocks=1 00:02:46.994 --rc geninfo_unexecuted_blocks=1 00:02:46.994 00:02:46.994 ' 00:02:46.994 07:04:50 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:02:46.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:46.994 --rc genhtml_branch_coverage=1 00:02:46.994 --rc genhtml_function_coverage=1 00:02:46.994 --rc genhtml_legend=1 00:02:46.994 --rc geninfo_all_blocks=1 00:02:46.994 --rc geninfo_unexecuted_blocks=1 00:02:46.994 00:02:46.994 ' 00:02:46.994 07:04:50 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:02:46.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:46.994 --rc genhtml_branch_coverage=1 00:02:46.994 --rc genhtml_function_coverage=1 00:02:46.994 --rc genhtml_legend=1 00:02:46.994 --rc geninfo_all_blocks=1 00:02:46.994 --rc geninfo_unexecuted_blocks=1 00:02:46.994 00:02:46.994 ' 00:02:46.994 07:04:50 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:02:46.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:46.994 --rc genhtml_branch_coverage=1 00:02:46.994 --rc genhtml_function_coverage=1 00:02:46.994 --rc genhtml_legend=1 00:02:46.994 --rc geninfo_all_blocks=1 00:02:46.994 --rc geninfo_unexecuted_blocks=1 00:02:46.994 00:02:46.994 ' 00:02:46.994 07:04:50 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:46.994 07:04:50 -- nvmf/common.sh@7 -- # uname -s 00:02:46.994 07:04:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:46.994 07:04:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:46.994 07:04:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:46.994 07:04:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:46.994 07:04:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:46.994 07:04:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:46.994 07:04:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:46.994 07:04:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:46.994 07:04:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:46.994 07:04:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:46.994 07:04:50 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:02:46.994 07:04:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:02:46.994 07:04:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:46.994 07:04:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:46.994 07:04:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:46.994 07:04:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:46.994 07:04:50 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:46.994 07:04:50 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:46.994 07:04:50 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:46.994 07:04:50 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:46.994 07:04:50 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:46.994 07:04:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:46.994 07:04:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:46.994 07:04:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:46.994 07:04:50 -- paths/export.sh@5 -- # export PATH 00:02:46.994 07:04:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:46.994 07:04:50 -- nvmf/common.sh@51 -- # : 0 00:02:46.994 07:04:50 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:46.994 07:04:50 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:46.994 07:04:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:46.994 07:04:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:46.994 07:04:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:46.994 07:04:50 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:46.994 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:46.994 07:04:50 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:46.994 07:04:50 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:46.994 07:04:50 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:46.994 07:04:50 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:46.994 07:04:50 -- spdk/autotest.sh@32 -- # uname -s 00:02:46.994 07:04:50 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:46.994 07:04:50 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:46.994 07:04:50 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
00:02:46.994 07:04:50 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:46.994 07:04:50 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:46.994 07:04:50 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:46.994 07:04:50 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:46.994 07:04:50 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:46.994 07:04:50 -- spdk/autotest.sh@48 -- # udevadm_pid=2361319 00:02:46.994 07:04:50 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:46.994 07:04:50 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:46.994 07:04:50 -- pm/common@17 -- # local monitor 00:02:46.994 07:04:50 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.994 07:04:50 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.994 07:04:50 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.994 07:04:50 -- pm/common@21 -- # date +%s 00:02:46.994 07:04:50 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.994 07:04:50 -- pm/common@21 -- # date +%s 00:02:46.994 07:04:50 -- pm/common@25 -- # sleep 1 00:02:46.994 07:04:50 -- pm/common@21 -- # date +%s 00:02:46.994 07:04:50 -- pm/common@21 -- # date +%s 00:02:46.994 07:04:50 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732082690 00:02:46.994 07:04:50 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732082690 00:02:46.994 07:04:50 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732082690 00:02:46.994 07:04:50 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732082690 00:02:46.994 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732082690_collect-cpu-load.pm.log 00:02:46.995 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732082690_collect-vmstat.pm.log 00:02:46.995 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732082690_collect-cpu-temp.pm.log 00:02:46.995 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732082690_collect-bmc-pm.bmc.pm.log 00:02:47.933 07:04:51 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:47.933 07:04:51 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:47.933 07:04:51 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:47.933 07:04:51 -- common/autotest_common.sh@10 -- # set +x 00:02:47.933 07:04:51 -- spdk/autotest.sh@59 -- # create_test_list 00:02:47.933 07:04:51 -- common/autotest_common.sh@750 -- # xtrace_disable 00:02:47.933 07:04:51 -- common/autotest_common.sh@10 -- # set +x 00:02:47.933 07:04:51 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:47.933 07:04:51 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:47.933 07:04:51 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:47.933 07:04:51 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:47.933 07:04:51 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:47.933 07:04:51 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:47.933 07:04:51 -- common/autotest_common.sh@1455 -- # uname 00:02:47.933 07:04:51 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:47.933 07:04:51 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:47.934 07:04:51 -- common/autotest_common.sh@1475 -- # uname 00:02:47.934 07:04:51 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:47.934 07:04:51 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:47.934 07:04:51 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:47.934 lcov: LCOV version 1.15 00:02:47.934 07:04:51 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:06.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:06.031 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:28.029 07:05:27 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:28.029 07:05:27 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:28.029 07:05:27 -- common/autotest_common.sh@10 -- # set +x 00:03:28.029 07:05:27 -- spdk/autotest.sh@78 -- # rm -f 00:03:28.030 07:05:27 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:28.030 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:28.030 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:28.030 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:28.030 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:28.030 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:28.030 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:28.030 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:28.030 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:28.030 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:03:28.030 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:28.030 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:28.030 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:28.030 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:28.030 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:28.030 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:28.030 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:28.030 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:28.030 07:05:29 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:03:28.030 07:05:29 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:28.030 07:05:29 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:28.030 07:05:29 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:28.030 07:05:29 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:28.030 07:05:29 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:28.030 07:05:29 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:28.030 07:05:29 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:28.030 07:05:29 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:28.030 07:05:29 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:28.030 07:05:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:28.030 07:05:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:28.030 07:05:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:28.030 07:05:29 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:28.030 07:05:29 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:28.030 No valid GPT data, bailing 00:03:28.030 07:05:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:28.030 07:05:29 -- scripts/common.sh@394 -- # pt= 00:03:28.030 07:05:29 -- scripts/common.sh@395 -- # return 1 00:03:28.030 07:05:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:28.030 1+0 records in 00:03:28.030 1+0 records out 00:03:28.030 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00169545 s, 618 MB/s 00:03:28.030 07:05:29 -- spdk/autotest.sh@105 -- # sync 00:03:28.030 07:05:29 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:28.030 07:05:29 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:28.030 07:05:29 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:28.291 07:05:31 -- spdk/autotest.sh@111 -- # uname -s 00:03:28.291 07:05:31 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:28.291 07:05:31 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:28.291 07:05:31 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:29.667 Hugepages 00:03:29.667 node hugesize free / total 00:03:29.667 node0 1048576kB 0 / 0 00:03:29.667 node0 2048kB 0 / 0 00:03:29.667 node1 1048576kB 0 / 0 00:03:29.667 node1 2048kB 0 / 0 00:03:29.667 00:03:29.667 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:29.667 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:29.667 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:29.667 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:29.667 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:29.667 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:29.667 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:29.667 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:29.667 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:29.667 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:29.667 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:29.667 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:29.667 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:29.667 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:29.667 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:29.667 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:29.667 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:29.667 I/OAT 0000:80:04.7 8086 
0e27 1 ioatdma - - 00:03:29.667 07:05:32 -- spdk/autotest.sh@117 -- # uname -s 00:03:29.667 07:05:32 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:29.667 07:05:32 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:29.667 07:05:32 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:31.056 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:31.056 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:31.056 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:31.056 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:31.056 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:31.056 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:31.056 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:31.056 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:31.056 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:31.056 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:31.056 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:31.056 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:31.056 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:31.056 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:31.056 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:31.056 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:31.994 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:03:31.994 07:05:35 -- common/autotest_common.sh@1515 -- # sleep 1 00:03:32.932 07:05:36 -- common/autotest_common.sh@1516 -- # bdfs=() 00:03:32.932 07:05:36 -- common/autotest_common.sh@1516 -- # local bdfs 00:03:32.932 07:05:36 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:03:32.932 07:05:36 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:03:32.932 07:05:36 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:32.932 07:05:36 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:32.932 07:05:36 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:32.932 07:05:36 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:32.932 07:05:36 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:33.192 07:05:36 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:33.192 07:05:36 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:0b:00.0 00:03:33.192 07:05:36 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:34.566 Waiting for block devices as requested 00:03:34.566 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:34.566 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:34.566 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:34.566 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:34.566 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:34.825 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:34.825 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:34.825 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:35.085 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:03:35.085 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:35.085 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:35.345 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:35.345 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:35.345 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:35.604 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:35.604 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:35.604 0000:80:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:03:35.862 07:05:39 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:35.862 07:05:39 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:0b:00.0 00:03:35.862 07:05:39 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:03:35.862 07:05:39 -- common/autotest_common.sh@1485 -- # grep 0000:0b:00.0/nvme/nvme 00:03:35.862 07:05:39 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:03:35.862 07:05:39 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 ]] 00:03:35.862 07:05:39 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:03:35.862 07:05:39 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:03:35.862 07:05:39 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:03:35.862 07:05:39 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:03:35.862 07:05:39 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:03:35.862 07:05:39 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:35.862 07:05:39 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:35.862 07:05:39 -- common/autotest_common.sh@1529 -- # oacs=' 0xf' 00:03:35.862 07:05:39 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:35.862 07:05:39 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:35.862 07:05:39 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:03:35.862 07:05:39 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:35.862 07:05:39 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:35.862 07:05:39 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:35.862 07:05:39 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:35.862 07:05:39 -- common/autotest_common.sh@1541 -- # continue 00:03:35.862 07:05:39 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:35.862 07:05:39 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:35.862 07:05:39 -- common/autotest_common.sh@10 -- # set +x 00:03:35.862 07:05:39 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:35.862 07:05:39 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:35.862 07:05:39 -- common/autotest_common.sh@10 -- # set +x 00:03:35.862 07:05:39 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:37.244 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:37.244 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:37.244 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:37.244 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:37.244 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:37.244 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:37.244 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:37.244 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:37.244 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:37.244 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:37.244 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:37.244 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:37.244 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:37.244 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:37.244 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:37.244 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:38.228 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:03:38.228 07:05:41 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:03:38.228 07:05:41 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:38.228 07:05:41 -- common/autotest_common.sh@10 -- # set +x 00:03:38.228 07:05:41 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:38.228 07:05:41 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:38.487 07:05:41 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:38.487 07:05:41 -- common/autotest_common.sh@1561 -- # bdfs=() 00:03:38.487 07:05:41 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:03:38.487 07:05:41 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:03:38.487 07:05:41 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:03:38.487 07:05:41 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:03:38.487 07:05:41 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:38.487 07:05:41 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:38.487 07:05:41 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:38.487 07:05:41 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:38.487 07:05:41 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:38.487 07:05:41 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:38.487 07:05:41 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:0b:00.0 00:03:38.487 07:05:41 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:38.487 07:05:41 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:0b:00.0/device 00:03:38.487 07:05:41 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:03:38.487 07:05:41 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:38.487 07:05:41 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:03:38.487 07:05:41 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:03:38.487 07:05:41 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:0b:00.0 00:03:38.487 07:05:41 -- common/autotest_common.sh@1577 -- # [[ -z 0000:0b:00.0 ]] 00:03:38.487 07:05:41 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=2371701 00:03:38.487 07:05:41 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:38.487 07:05:41 -- common/autotest_common.sh@1583 -- # waitforlisten 2371701 00:03:38.487 07:05:41 -- common/autotest_common.sh@833 -- # '[' -z 2371701 ']' 00:03:38.487 07:05:41 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:38.487 07:05:41 -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:38.487 07:05:41 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:38.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:38.487 07:05:41 -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:38.487 07:05:41 -- common/autotest_common.sh@10 -- # set +x 00:03:38.487 [2024-11-20 07:05:41.780990] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:03:38.487 [2024-11-20 07:05:41.781091] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2371701 ] 00:03:38.487 [2024-11-20 07:05:41.847799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:38.487 [2024-11-20 07:05:41.902847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:38.745 07:05:42 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:38.745 07:05:42 -- common/autotest_common.sh@866 -- # return 0 00:03:38.745 07:05:42 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:03:38.745 07:05:42 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:03:38.745 07:05:42 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:0b:00.0 00:03:42.027 nvme0n1 00:03:42.027 07:05:45 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:42.283 [2024-11-20 07:05:45.507238] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:42.283 [2024-11-20 07:05:45.507278] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:42.283 request: 00:03:42.283 { 00:03:42.283 "nvme_ctrlr_name": "nvme0", 00:03:42.283 "password": "test", 00:03:42.283 "method": "bdev_nvme_opal_revert", 00:03:42.283 "req_id": 1 00:03:42.283 } 00:03:42.283 Got JSON-RPC error response 00:03:42.283 response: 00:03:42.283 { 00:03:42.283 "code": -32603, 00:03:42.283 "message": "Internal error" 00:03:42.283 } 00:03:42.283 07:05:45 -- common/autotest_common.sh@1589 -- # true 00:03:42.284 07:05:45 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:03:42.284 07:05:45 -- common/autotest_common.sh@1593 -- # killprocess 2371701 00:03:42.284 07:05:45 -- common/autotest_common.sh@952 -- # '[' -z 2371701 ']' 00:03:42.284 07:05:45 -- common/autotest_common.sh@956 -- # kill -0 2371701 00:03:42.284 07:05:45 -- common/autotest_common.sh@957 -- # uname 00:03:42.284 07:05:45 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:42.284 07:05:45 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2371701 00:03:42.284 07:05:45 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:42.284 07:05:45 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:42.284 07:05:45 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2371701' 00:03:42.284 killing process with pid 2371701 00:03:42.284 07:05:45 -- common/autotest_common.sh@971 -- # kill 2371701 00:03:42.284 07:05:45 -- common/autotest_common.sh@976 -- # wait 2371701 00:03:44.180 07:05:47 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:44.180 07:05:47 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:44.180 07:05:47 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:44.180 07:05:47 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:44.180 07:05:47 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:44.180 07:05:47 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:44.180 07:05:47 -- common/autotest_common.sh@10 -- # set +x 00:03:44.180 07:05:47 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:44.180 07:05:47 -- spdk/autotest.sh@155 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:44.180 07:05:47 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:44.180 07:05:47 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:44.180 07:05:47 -- common/autotest_common.sh@10 -- # set +x 00:03:44.180 ************************************ 00:03:44.180 START TEST env 00:03:44.180 ************************************ 00:03:44.180 07:05:47 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:44.180 * Looking for test storage... 00:03:44.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:44.180 07:05:47 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:44.180 07:05:47 env -- common/autotest_common.sh@1691 -- # lcov --version 00:03:44.180 07:05:47 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:44.180 07:05:47 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:44.180 07:05:47 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:44.180 07:05:47 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:44.180 07:05:47 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:44.180 07:05:47 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:44.180 07:05:47 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:44.180 07:05:47 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:44.180 07:05:47 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:44.180 07:05:47 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:44.180 07:05:47 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:44.180 07:05:47 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:44.180 07:05:47 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:44.180 07:05:47 env -- scripts/common.sh@344 -- # case "$op" in 00:03:44.180 07:05:47 env -- scripts/common.sh@345 -- # : 1 00:03:44.180 07:05:47 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:44.180 07:05:47 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:44.180 07:05:47 env -- scripts/common.sh@365 -- # decimal 1 00:03:44.180 07:05:47 env -- scripts/common.sh@353 -- # local d=1 00:03:44.180 07:05:47 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:44.180 07:05:47 env -- scripts/common.sh@355 -- # echo 1 00:03:44.180 07:05:47 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:44.180 07:05:47 env -- scripts/common.sh@366 -- # decimal 2 00:03:44.180 07:05:47 env -- scripts/common.sh@353 -- # local d=2 00:03:44.180 07:05:47 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:44.180 07:05:47 env -- scripts/common.sh@355 -- # echo 2 00:03:44.180 07:05:47 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:44.180 07:05:47 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:44.180 07:05:47 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:44.180 07:05:47 env -- scripts/common.sh@368 -- # return 0 00:03:44.180 07:05:47 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:44.180 07:05:47 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:44.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.180 --rc genhtml_branch_coverage=1 00:03:44.180 --rc genhtml_function_coverage=1 00:03:44.180 --rc genhtml_legend=1 00:03:44.180 --rc geninfo_all_blocks=1 00:03:44.180 --rc geninfo_unexecuted_blocks=1 00:03:44.180 00:03:44.180 ' 00:03:44.180 07:05:47 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:44.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.180 --rc genhtml_branch_coverage=1 00:03:44.180 --rc genhtml_function_coverage=1 00:03:44.180 --rc genhtml_legend=1 00:03:44.180 --rc geninfo_all_blocks=1 00:03:44.180 --rc geninfo_unexecuted_blocks=1 00:03:44.180 00:03:44.180 ' 00:03:44.180 07:05:47 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:44.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.180 --rc genhtml_branch_coverage=1 00:03:44.180 --rc genhtml_function_coverage=1 00:03:44.180 --rc genhtml_legend=1 00:03:44.180 --rc geninfo_all_blocks=1 00:03:44.180 --rc geninfo_unexecuted_blocks=1 00:03:44.180 00:03:44.180 ' 00:03:44.180 07:05:47 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:44.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.180 --rc genhtml_branch_coverage=1 00:03:44.180 --rc genhtml_function_coverage=1 00:03:44.180 --rc genhtml_legend=1 00:03:44.180 --rc geninfo_all_blocks=1 00:03:44.181 --rc geninfo_unexecuted_blocks=1 00:03:44.181 00:03:44.181 ' 00:03:44.181 07:05:47 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:44.181 07:05:47 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:44.181 07:05:47 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:44.181 07:05:47 env -- common/autotest_common.sh@10 -- # set +x 00:03:44.181 ************************************ 00:03:44.181 START TEST env_memory 00:03:44.181 ************************************ 00:03:44.181 07:05:47 env.env_memory -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:44.181 00:03:44.181 00:03:44.181 CUnit - A unit testing framework for C - Version 2.1-3 00:03:44.181 http://cunit.sourceforge.net/ 00:03:44.181 00:03:44.181 00:03:44.181 Suite: memory 00:03:44.181 Test: alloc and free memory map ...[2024-11-20 07:05:47.549963] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:44.181 passed 00:03:44.181 Test: mem map translation ...[2024-11-20 07:05:47.571279] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:44.181 [2024-11-20 07:05:47.571305] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:44.181 [2024-11-20 07:05:47.571351] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:44.181 [2024-11-20 07:05:47.571362] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:44.181 passed 00:03:44.439 Test: mem map registration ...[2024-11-20 07:05:47.613816] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:44.439 [2024-11-20 07:05:47.613847] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:44.439 passed 00:03:44.439 Test: mem map adjacent registrations ...passed 00:03:44.439 00:03:44.439 Run Summary: Type Total Ran Passed Failed Inactive 00:03:44.439 suites 1 1 n/a 0 0 00:03:44.439 tests 4 4 4 0 0 00:03:44.439 asserts 152 152 152 0 n/a 00:03:44.439 00:03:44.439 Elapsed time = 0.146 seconds 00:03:44.439 00:03:44.439 real 0m0.155s 00:03:44.439 user 0m0.147s 00:03:44.439 sys 0m0.008s 00:03:44.439 07:05:47 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:44.439 07:05:47 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:44.439 ************************************ 00:03:44.439 END TEST env_memory 00:03:44.439 ************************************ 00:03:44.439 07:05:47 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:44.439 07:05:47 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:44.439 07:05:47 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:44.439 07:05:47 env -- common/autotest_common.sh@10 -- # set +x 00:03:44.439 ************************************ 00:03:44.439 START TEST env_vtophys 00:03:44.439 ************************************ 00:03:44.439 07:05:47 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:44.439 EAL: lib.eal log level changed from notice to debug 00:03:44.439 EAL: Detected lcore 0 as core 0 on socket 0 00:03:44.439 EAL: Detected lcore 1 as core 1 on socket 0 00:03:44.439 EAL: Detected lcore 2 as core 2 on socket 0 00:03:44.439 EAL: Detected lcore 3 as core 3 on socket 0 00:03:44.439 EAL: Detected lcore 4 as core 4 on socket 0 00:03:44.439 EAL: Detected lcore 5 as core 5 on socket 0 00:03:44.439 EAL: Detected lcore 6 as core 8 on socket 0 00:03:44.439 EAL: Detected lcore 7 as core 9 on socket 0 00:03:44.439 EAL: Detected lcore 8 as core 10 on socket 0 00:03:44.439 EAL: Detected lcore 9 as core 11 on socket 0 00:03:44.439 EAL: Detected lcore 10 
as core 12 on socket 0 00:03:44.439 EAL: Detected lcore 11 as core 13 on socket 0 00:03:44.439 EAL: Detected lcore 12 as core 0 on socket 1 00:03:44.439 EAL: Detected lcore 13 as core 1 on socket 1 00:03:44.439 EAL: Detected lcore 14 as core 2 on socket 1 00:03:44.439 EAL: Detected lcore 15 as core 3 on socket 1 00:03:44.439 EAL: Detected lcore 16 as core 4 on socket 1 00:03:44.439 EAL: Detected lcore 17 as core 5 on socket 1 00:03:44.439 EAL: Detected lcore 18 as core 8 on socket 1 00:03:44.439 EAL: Detected lcore 19 as core 9 on socket 1 00:03:44.439 EAL: Detected lcore 20 as core 10 on socket 1 00:03:44.439 EAL: Detected lcore 21 as core 11 on socket 1 00:03:44.439 EAL: Detected lcore 22 as core 12 on socket 1 00:03:44.439 EAL: Detected lcore 23 as core 13 on socket 1 00:03:44.439 EAL: Detected lcore 24 as core 0 on socket 0 00:03:44.439 EAL: Detected lcore 25 as core 1 on socket 0 00:03:44.439 EAL: Detected lcore 26 as core 2 on socket 0 00:03:44.439 EAL: Detected lcore 27 as core 3 on socket 0 00:03:44.439 EAL: Detected lcore 28 as core 4 on socket 0 00:03:44.439 EAL: Detected lcore 29 as core 5 on socket 0 00:03:44.439 EAL: Detected lcore 30 as core 8 on socket 0 00:03:44.439 EAL: Detected lcore 31 as core 9 on socket 0 00:03:44.439 EAL: Detected lcore 32 as core 10 on socket 0 00:03:44.439 EAL: Detected lcore 33 as core 11 on socket 0 00:03:44.439 EAL: Detected lcore 34 as core 12 on socket 0 00:03:44.439 EAL: Detected lcore 35 as core 13 on socket 0 00:03:44.439 EAL: Detected lcore 36 as core 0 on socket 1 00:03:44.439 EAL: Detected lcore 37 as core 1 on socket 1 00:03:44.439 EAL: Detected lcore 38 as core 2 on socket 1 00:03:44.439 EAL: Detected lcore 39 as core 3 on socket 1 00:03:44.439 EAL: Detected lcore 40 as core 4 on socket 1 00:03:44.439 EAL: Detected lcore 41 as core 5 on socket 1 00:03:44.439 EAL: Detected lcore 42 as core 8 on socket 1 00:03:44.439 EAL: Detected lcore 43 as core 9 on socket 1 00:03:44.439 EAL: Detected lcore 44 as core 10 on socket 1 00:03:44.439 EAL: Detected lcore 45 as core 11 on socket 1 00:03:44.439 EAL: Detected lcore 46 as core 12 on socket 1 00:03:44.440 EAL: Detected lcore 47 as core 13 on socket 1 00:03:44.440 EAL: Maximum logical cores by configuration: 128 00:03:44.440 EAL: Detected CPU lcores: 48 00:03:44.440 EAL: Detected NUMA nodes: 2 00:03:44.440 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:44.440 EAL: Detected shared linkage of DPDK 00:03:44.440 EAL: No shared files mode enabled, IPC will be disabled 00:03:44.440 EAL: Bus pci wants IOVA as 'DC' 00:03:44.440 EAL: Buses did not request a specific IOVA mode. 00:03:44.440 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:44.440 EAL: Selected IOVA mode 'VA' 00:03:44.440 EAL: Probing VFIO support... 00:03:44.440 EAL: IOMMU type 1 (Type 1) is supported 00:03:44.440 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:44.440 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:44.440 EAL: VFIO support initialized 00:03:44.440 EAL: Ask a virtual area of 0x2e000 bytes 00:03:44.440 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:44.440 EAL: Setting up physically contiguous memory... 
00:03:44.440 EAL: Setting maximum number of open files to 524288 00:03:44.440 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:44.440 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:44.440 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:44.440 EAL: Ask a virtual area of 0x61000 bytes 00:03:44.440 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:44.440 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:44.440 EAL: Ask a virtual area of 0x400000000 bytes 00:03:44.440 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:44.440 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:44.440 EAL: Ask a virtual area of 0x61000 bytes 00:03:44.440 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:44.440 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:44.440 EAL: Ask a virtual area of 0x400000000 bytes 00:03:44.440 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:44.440 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:44.440 EAL: Ask a virtual area of 0x61000 bytes 00:03:44.440 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:44.440 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:44.440 EAL: Ask a virtual area of 0x400000000 bytes 00:03:44.440 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:44.440 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:44.440 EAL: Ask a virtual area of 0x61000 bytes 00:03:44.440 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:44.440 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:44.440 EAL: Ask a virtual area of 0x400000000 bytes 00:03:44.440 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:44.440 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:44.440 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:44.440 EAL: Ask a virtual area of 0x61000 bytes 00:03:44.440 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:44.440 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:44.440 EAL: Ask a virtual area of 0x400000000 bytes 00:03:44.440 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:44.440 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:44.440 EAL: Ask a virtual area of 0x61000 bytes 00:03:44.440 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:44.440 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:44.440 EAL: Ask a virtual area of 0x400000000 bytes 00:03:44.440 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:44.440 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:44.440 EAL: Ask a virtual area of 0x61000 bytes 00:03:44.440 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:44.440 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:44.440 EAL: Ask a virtual area of 0x400000000 bytes 00:03:44.440 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:44.440 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:44.440 EAL: Ask a virtual area of 0x61000 bytes 00:03:44.440 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:44.440 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:44.440 EAL: Ask a virtual area of 0x400000000 bytes 00:03:44.440 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:03:44.440 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:44.440 EAL: Hugepages will be freed exactly as allocated. 00:03:44.440 EAL: No shared files mode enabled, IPC is disabled 00:03:44.440 EAL: No shared files mode enabled, IPC is disabled 00:03:44.440 EAL: TSC frequency is ~2700000 KHz 00:03:44.440 EAL: Main lcore 0 is ready (tid=7f3098d85a00;cpuset=[0]) 00:03:44.440 EAL: Trying to obtain current memory policy. 00:03:44.440 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.440 EAL: Restoring previous memory policy: 0 00:03:44.440 EAL: request: mp_malloc_sync 00:03:44.440 EAL: No shared files mode enabled, IPC is disabled 00:03:44.440 EAL: Heap on socket 0 was expanded by 2MB 00:03:44.440 EAL: No shared files mode enabled, IPC is disabled 00:03:44.440 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:44.440 EAL: Mem event callback 'spdk:(nil)' registered 00:03:44.440 00:03:44.440 00:03:44.440 CUnit - A unit testing framework for C - Version 2.1-3 00:03:44.440 http://cunit.sourceforge.net/ 00:03:44.440 00:03:44.440 00:03:44.440 Suite: components_suite 00:03:44.440 Test: vtophys_malloc_test ...passed 00:03:44.440 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:44.440 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.440 EAL: Restoring previous memory policy: 4 00:03:44.440 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.440 EAL: request: mp_malloc_sync 00:03:44.440 EAL: No shared files mode enabled, IPC is disabled 00:03:44.440 EAL: Heap on socket 0 was expanded by 4MB 00:03:44.440 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.440 EAL: request: mp_malloc_sync 00:03:44.440 EAL: No shared files mode enabled, IPC is disabled 00:03:44.440 EAL: Heap on socket 0 was shrunk by 4MB 00:03:44.440 EAL: Trying to obtain current memory policy. 00:03:44.440 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.440 EAL: Restoring previous memory policy: 4 00:03:44.440 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.440 EAL: request: mp_malloc_sync 00:03:44.440 EAL: No shared files mode enabled, IPC is disabled 00:03:44.440 EAL: Heap on socket 0 was expanded by 6MB 00:03:44.440 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.440 EAL: request: mp_malloc_sync 00:03:44.440 EAL: No shared files mode enabled, IPC is disabled 00:03:44.440 EAL: Heap on socket 0 was shrunk by 6MB 00:03:44.440 EAL: Trying to obtain current memory policy. 00:03:44.440 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.440 EAL: Restoring previous memory policy: 4 00:03:44.440 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.440 EAL: request: mp_malloc_sync 00:03:44.440 EAL: No shared files mode enabled, IPC is disabled 00:03:44.440 EAL: Heap on socket 0 was expanded by 10MB 00:03:44.440 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.440 EAL: request: mp_malloc_sync 00:03:44.440 EAL: No shared files mode enabled, IPC is disabled 00:03:44.440 EAL: Heap on socket 0 was shrunk by 10MB 00:03:44.440 EAL: Trying to obtain current memory policy. 
00:03:44.440 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.440 EAL: Restoring previous memory policy: 4 00:03:44.440 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.440 EAL: request: mp_malloc_sync 00:03:44.440 EAL: No shared files mode enabled, IPC is disabled 00:03:44.440 EAL: Heap on socket 0 was expanded by 18MB 00:03:44.440 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.440 EAL: request: mp_malloc_sync 00:03:44.440 EAL: No shared files mode enabled, IPC is disabled 00:03:44.440 EAL: Heap on socket 0 was shrunk by 18MB 00:03:44.440 EAL: Trying to obtain current memory policy. 00:03:44.440 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.440 EAL: Restoring previous memory policy: 4 00:03:44.440 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.440 EAL: request: mp_malloc_sync 00:03:44.440 EAL: No shared files mode enabled, IPC is disabled 00:03:44.440 EAL: Heap on socket 0 was expanded by 34MB 00:03:44.440 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.440 EAL: request: mp_malloc_sync 00:03:44.440 EAL: No shared files mode enabled, IPC is disabled 00:03:44.440 EAL: Heap on socket 0 was shrunk by 34MB 00:03:44.440 EAL: Trying to obtain current memory policy. 00:03:44.440 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.440 EAL: Restoring previous memory policy: 4 00:03:44.440 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.440 EAL: request: mp_malloc_sync 00:03:44.440 EAL: No shared files mode enabled, IPC is disabled 00:03:44.440 EAL: Heap on socket 0 was expanded by 66MB 00:03:44.440 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.440 EAL: request: mp_malloc_sync 00:03:44.440 EAL: No shared files mode enabled, IPC is disabled 00:03:44.440 EAL: Heap on socket 0 was shrunk by 66MB 00:03:44.440 EAL: Trying to obtain current memory policy. 00:03:44.440 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.698 EAL: Restoring previous memory policy: 4 00:03:44.698 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.698 EAL: request: mp_malloc_sync 00:03:44.698 EAL: No shared files mode enabled, IPC is disabled 00:03:44.698 EAL: Heap on socket 0 was expanded by 130MB 00:03:44.698 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.698 EAL: request: mp_malloc_sync 00:03:44.698 EAL: No shared files mode enabled, IPC is disabled 00:03:44.698 EAL: Heap on socket 0 was shrunk by 130MB 00:03:44.698 EAL: Trying to obtain current memory policy. 00:03:44.698 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.698 EAL: Restoring previous memory policy: 4 00:03:44.698 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.698 EAL: request: mp_malloc_sync 00:03:44.698 EAL: No shared files mode enabled, IPC is disabled 00:03:44.698 EAL: Heap on socket 0 was expanded by 258MB 00:03:44.698 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.698 EAL: request: mp_malloc_sync 00:03:44.698 EAL: No shared files mode enabled, IPC is disabled 00:03:44.698 EAL: Heap on socket 0 was shrunk by 258MB 00:03:44.698 EAL: Trying to obtain current memory policy. 
00:03:44.698 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.956 EAL: Restoring previous memory policy: 4 00:03:44.956 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.956 EAL: request: mp_malloc_sync 00:03:44.956 EAL: No shared files mode enabled, IPC is disabled 00:03:44.956 EAL: Heap on socket 0 was expanded by 514MB 00:03:44.956 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.215 EAL: request: mp_malloc_sync 00:03:45.215 EAL: No shared files mode enabled, IPC is disabled 00:03:45.215 EAL: Heap on socket 0 was shrunk by 514MB 00:03:45.215 EAL: Trying to obtain current memory policy. 00:03:45.215 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:45.472 EAL: Restoring previous memory policy: 4 00:03:45.472 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.472 EAL: request: mp_malloc_sync 00:03:45.472 EAL: No shared files mode enabled, IPC is disabled 00:03:45.472 EAL: Heap on socket 0 was expanded by 1026MB 00:03:45.730 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.989 EAL: request: mp_malloc_sync 00:03:45.989 EAL: No shared files mode enabled, IPC is disabled 00:03:45.989 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:45.989 passed 00:03:45.989 00:03:45.989 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.989 suites 1 1 n/a 0 0 00:03:45.989 tests 2 2 2 0 0 00:03:45.989 asserts 497 497 497 0 n/a 00:03:45.989 00:03:45.989 Elapsed time = 1.345 seconds 00:03:45.989 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.989 EAL: request: mp_malloc_sync 00:03:45.989 EAL: No shared files mode enabled, IPC is disabled 00:03:45.989 EAL: Heap on socket 0 was shrunk by 2MB 00:03:45.989 EAL: No shared files mode enabled, IPC is disabled 00:03:45.989 EAL: No shared files mode enabled, IPC is disabled 00:03:45.989 EAL: No shared files mode enabled, IPC is disabled 00:03:45.989 00:03:45.989 real 0m1.466s 00:03:45.989 user 0m0.866s 00:03:45.989 sys 0m0.567s 00:03:45.989 07:05:49 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:45.989 07:05:49 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:45.989 ************************************ 00:03:45.989 END TEST env_vtophys 00:03:45.989 ************************************ 00:03:45.989 07:05:49 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:45.989 07:05:49 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:45.989 07:05:49 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:45.989 07:05:49 env -- common/autotest_common.sh@10 -- # set +x 00:03:45.989 ************************************ 00:03:45.989 START TEST env_pci 00:03:45.989 ************************************ 00:03:45.989 07:05:49 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:45.989 00:03:45.989 00:03:45.989 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.989 http://cunit.sourceforge.net/ 00:03:45.989 00:03:45.989 00:03:45.989 Suite: pci 00:03:45.990 Test: pci_hook ...[2024-11-20 07:05:49.245838] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2372601 has claimed it 00:03:45.990 EAL: Cannot find device (10000:00:01.0) 00:03:45.990 EAL: Failed to attach device on primary process 00:03:45.990 passed 00:03:45.990 00:03:45.990 Run Summary: Type Total Ran Passed Failed Inactive 
00:03:45.990 suites 1 1 n/a 0 0 00:03:45.990 tests 1 1 1 0 0 00:03:45.990 asserts 25 25 25 0 n/a 00:03:45.990 00:03:45.990 Elapsed time = 0.022 seconds 00:03:45.990 00:03:45.990 real 0m0.036s 00:03:45.990 user 0m0.013s 00:03:45.990 sys 0m0.023s 00:03:45.990 07:05:49 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:45.990 07:05:49 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:45.990 ************************************ 00:03:45.990 END TEST env_pci 00:03:45.990 ************************************ 00:03:45.990 07:05:49 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:45.990 07:05:49 env -- env/env.sh@15 -- # uname 00:03:45.990 07:05:49 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:45.990 07:05:49 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:45.990 07:05:49 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:45.990 07:05:49 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:03:45.990 07:05:49 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:45.990 07:05:49 env -- common/autotest_common.sh@10 -- # set +x 00:03:45.990 ************************************ 00:03:45.990 START TEST env_dpdk_post_init 00:03:45.990 ************************************ 00:03:45.990 07:05:49 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:45.990 EAL: Detected CPU lcores: 48 00:03:45.990 EAL: Detected NUMA nodes: 2 00:03:45.990 EAL: Detected shared linkage of DPDK 00:03:45.990 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:45.990 EAL: Selected IOVA mode 'VA' 00:03:45.990 EAL: VFIO support initialized 00:03:45.990 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:46.249 EAL: Using IOMMU type 1 (Type 1) 00:03:46.249 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:46.249 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:46.249 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:46.249 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:46.249 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:46.249 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:46.249 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:46.249 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:47.187 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:0b:00.0 (socket 0) 00:03:47.187 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:03:47.187 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:47.187 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:03:47.187 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:47.187 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:47.187 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:47.188 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:47.188 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 
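The spdk_ioat and spdk_nvme probe lines above only appear for devices that were detached from their kernel drivers and handed to vfio-pci (or uio) before the test started; on these CI nodes that is normally done by SPDK's setup script. A hedged sketch of that usual preparation step (the HUGEMEM value here is arbitrary, not taken from this run):

    # Illustrative sketch of the usual device/hugepage preparation; not part of this log.
    sudo HUGEMEM=8192 ./scripts/setup.sh    # reserve hugepages, bind NVMe/ioat devices to vfio-pci
    ./scripts/setup.sh status               # list which PCI BDFs are bound to which driver
    sudo ./scripts/setup.sh reset           # hand the devices back to their kernel drivers afterwards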
00:03:50.470 EAL: Releasing PCI mapped resource for 0000:0b:00.0 00:03:50.470 EAL: Calling pci_unmap_resource for 0000:0b:00.0 at 0x202001020000 00:03:50.470 Starting DPDK initialization... 00:03:50.470 Starting SPDK post initialization... 00:03:50.470 SPDK NVMe probe 00:03:50.470 Attaching to 0000:0b:00.0 00:03:50.470 Attached to 0000:0b:00.0 00:03:50.470 Cleaning up... 00:03:50.470 00:03:50.470 real 0m4.399s 00:03:50.470 user 0m3.002s 00:03:50.470 sys 0m0.456s 00:03:50.470 07:05:53 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:50.470 07:05:53 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:50.470 ************************************ 00:03:50.470 END TEST env_dpdk_post_init 00:03:50.470 ************************************ 00:03:50.470 07:05:53 env -- env/env.sh@26 -- # uname 00:03:50.470 07:05:53 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:50.470 07:05:53 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:50.470 07:05:53 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:50.470 07:05:53 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:50.470 07:05:53 env -- common/autotest_common.sh@10 -- # set +x 00:03:50.470 ************************************ 00:03:50.470 START TEST env_mem_callbacks 00:03:50.470 ************************************ 00:03:50.470 07:05:53 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:50.470 EAL: Detected CPU lcores: 48 00:03:50.470 EAL: Detected NUMA nodes: 2 00:03:50.470 EAL: Detected shared linkage of DPDK 00:03:50.470 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:50.470 EAL: Selected IOVA mode 'VA' 00:03:50.470 EAL: VFIO support initialized 00:03:50.470 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:50.470 00:03:50.470 00:03:50.470 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.470 http://cunit.sourceforge.net/ 00:03:50.470 00:03:50.470 00:03:50.470 Suite: memory 00:03:50.470 Test: test ... 
00:03:50.470 register 0x200000200000 2097152 00:03:50.470 malloc 3145728 00:03:50.470 register 0x200000400000 4194304 00:03:50.470 buf 0x200000500000 len 3145728 PASSED 00:03:50.470 malloc 64 00:03:50.470 buf 0x2000004fff40 len 64 PASSED 00:03:50.470 malloc 4194304 00:03:50.470 register 0x200000800000 6291456 00:03:50.470 buf 0x200000a00000 len 4194304 PASSED 00:03:50.470 free 0x200000500000 3145728 00:03:50.470 free 0x2000004fff40 64 00:03:50.470 unregister 0x200000400000 4194304 PASSED 00:03:50.470 free 0x200000a00000 4194304 00:03:50.470 unregister 0x200000800000 6291456 PASSED 00:03:50.470 malloc 8388608 00:03:50.470 register 0x200000400000 10485760 00:03:50.470 buf 0x200000600000 len 8388608 PASSED 00:03:50.470 free 0x200000600000 8388608 00:03:50.470 unregister 0x200000400000 10485760 PASSED 00:03:50.470 passed 00:03:50.470 00:03:50.470 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.470 suites 1 1 n/a 0 0 00:03:50.470 tests 1 1 1 0 0 00:03:50.470 asserts 15 15 15 0 n/a 00:03:50.470 00:03:50.470 Elapsed time = 0.004 seconds 00:03:50.470 00:03:50.470 real 0m0.047s 00:03:50.470 user 0m0.015s 00:03:50.470 sys 0m0.031s 00:03:50.470 07:05:53 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:50.470 07:05:53 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:50.470 ************************************ 00:03:50.470 END TEST env_mem_callbacks 00:03:50.470 ************************************ 00:03:50.470 00:03:50.470 real 0m6.491s 00:03:50.470 user 0m4.229s 00:03:50.470 sys 0m1.309s 00:03:50.470 07:05:53 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:50.470 07:05:53 env -- common/autotest_common.sh@10 -- # set +x 00:03:50.470 ************************************ 00:03:50.470 END TEST env 00:03:50.470 ************************************ 00:03:50.470 07:05:53 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:50.470 07:05:53 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:50.470 07:05:53 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:50.470 07:05:53 -- common/autotest_common.sh@10 -- # set +x 00:03:50.470 ************************************ 00:03:50.470 START TEST rpc 00:03:50.470 ************************************ 00:03:50.470 07:05:53 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:50.729 * Looking for test storage... 
00:03:50.729 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:50.729 07:05:53 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:50.729 07:05:53 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:50.729 07:05:53 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:50.729 07:05:54 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:50.729 07:05:54 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:50.729 07:05:54 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:50.729 07:05:54 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:50.729 07:05:54 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:50.729 07:05:54 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:50.729 07:05:54 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:50.729 07:05:54 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:50.729 07:05:54 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:50.729 07:05:54 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:50.729 07:05:54 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:50.729 07:05:54 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:50.729 07:05:54 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:50.729 07:05:54 rpc -- scripts/common.sh@345 -- # : 1 00:03:50.729 07:05:54 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:50.729 07:05:54 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:50.729 07:05:54 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:50.729 07:05:54 rpc -- scripts/common.sh@353 -- # local d=1 00:03:50.729 07:05:54 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:50.729 07:05:54 rpc -- scripts/common.sh@355 -- # echo 1 00:03:50.729 07:05:54 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:50.729 07:05:54 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:50.729 07:05:54 rpc -- scripts/common.sh@353 -- # local d=2 00:03:50.729 07:05:54 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:50.729 07:05:54 rpc -- scripts/common.sh@355 -- # echo 2 00:03:50.729 07:05:54 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:50.729 07:05:54 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:50.729 07:05:54 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:50.729 07:05:54 rpc -- scripts/common.sh@368 -- # return 0 00:03:50.729 07:05:54 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:50.729 07:05:54 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:50.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.729 --rc genhtml_branch_coverage=1 00:03:50.729 --rc genhtml_function_coverage=1 00:03:50.729 --rc genhtml_legend=1 00:03:50.729 --rc geninfo_all_blocks=1 00:03:50.729 --rc geninfo_unexecuted_blocks=1 00:03:50.729 00:03:50.729 ' 00:03:50.729 07:05:54 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:50.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.729 --rc genhtml_branch_coverage=1 00:03:50.729 --rc genhtml_function_coverage=1 00:03:50.729 --rc genhtml_legend=1 00:03:50.729 --rc geninfo_all_blocks=1 00:03:50.729 --rc geninfo_unexecuted_blocks=1 00:03:50.729 00:03:50.729 ' 00:03:50.729 07:05:54 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:50.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.729 --rc genhtml_branch_coverage=1 00:03:50.729 --rc genhtml_function_coverage=1 
00:03:50.729 --rc genhtml_legend=1 00:03:50.729 --rc geninfo_all_blocks=1 00:03:50.729 --rc geninfo_unexecuted_blocks=1 00:03:50.729 00:03:50.729 ' 00:03:50.729 07:05:54 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:50.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.729 --rc genhtml_branch_coverage=1 00:03:50.729 --rc genhtml_function_coverage=1 00:03:50.729 --rc genhtml_legend=1 00:03:50.729 --rc geninfo_all_blocks=1 00:03:50.729 --rc geninfo_unexecuted_blocks=1 00:03:50.729 00:03:50.729 ' 00:03:50.729 07:05:54 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2373367 00:03:50.729 07:05:54 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:50.729 07:05:54 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:50.729 07:05:54 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2373367 00:03:50.729 07:05:54 rpc -- common/autotest_common.sh@833 -- # '[' -z 2373367 ']' 00:03:50.729 07:05:54 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:50.729 07:05:54 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:50.729 07:05:54 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:50.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:50.729 07:05:54 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:50.729 07:05:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:50.729 [2024-11-20 07:05:54.071070] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:03:50.729 [2024-11-20 07:05:54.071156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2373367 ] 00:03:50.729 [2024-11-20 07:05:54.137845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:50.987 [2024-11-20 07:05:54.196760] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:50.987 [2024-11-20 07:05:54.196811] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2373367' to capture a snapshot of events at runtime. 00:03:50.987 [2024-11-20 07:05:54.196840] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:50.987 [2024-11-20 07:05:54.196851] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:50.987 [2024-11-20 07:05:54.196860] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2373367 for offline analysis/debug. 
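With spdk_tgt now listening on /var/tmp/spdk.sock, the rpc_integrity test below drives it through the rpc_cmd wrapper; the same JSON-RPC methods can be issued by hand with scripts/rpc.py. A minimal sketch mirroring the create/inspect/delete cycle the test performs:

    # Illustrative only: the JSON-RPC calls exercised by rpc_integrity, issued manually.
    ./scripts/rpc.py bdev_malloc_create 8 512                      # 8 MiB malloc bdev, 512-byte blocks -> Malloc0
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0  # claim Malloc0 behind a passthru bdev
    ./scripts/rpc.py bdev_get_bdevs | jq length                    # expect 2 while both bdevs exist
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0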
00:03:50.987 [2024-11-20 07:05:54.197438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:51.247 07:05:54 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:51.247 07:05:54 rpc -- common/autotest_common.sh@866 -- # return 0 00:03:51.247 07:05:54 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:51.247 07:05:54 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:51.247 07:05:54 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:51.247 07:05:54 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:51.247 07:05:54 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:51.247 07:05:54 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:51.247 07:05:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:51.247 ************************************ 00:03:51.247 START TEST rpc_integrity 00:03:51.247 ************************************ 00:03:51.247 07:05:54 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:03:51.247 07:05:54 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:51.247 07:05:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.247 07:05:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.247 07:05:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.247 07:05:54 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:51.247 07:05:54 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:51.247 07:05:54 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:51.247 07:05:54 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:51.247 07:05:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.247 07:05:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.247 07:05:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.247 07:05:54 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:51.247 07:05:54 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:51.247 07:05:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.247 07:05:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.247 07:05:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.247 07:05:54 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:51.247 { 00:03:51.247 "name": "Malloc0", 00:03:51.247 "aliases": [ 00:03:51.247 "a92660c0-2482-43a1-a508-d63b4a92bf24" 00:03:51.247 ], 00:03:51.247 "product_name": "Malloc disk", 00:03:51.247 "block_size": 512, 00:03:51.247 "num_blocks": 16384, 00:03:51.247 "uuid": "a92660c0-2482-43a1-a508-d63b4a92bf24", 00:03:51.247 "assigned_rate_limits": { 00:03:51.247 "rw_ios_per_sec": 0, 00:03:51.247 "rw_mbytes_per_sec": 0, 00:03:51.247 "r_mbytes_per_sec": 0, 00:03:51.247 "w_mbytes_per_sec": 0 00:03:51.247 }, 
00:03:51.247 "claimed": false, 00:03:51.247 "zoned": false, 00:03:51.247 "supported_io_types": { 00:03:51.247 "read": true, 00:03:51.247 "write": true, 00:03:51.247 "unmap": true, 00:03:51.247 "flush": true, 00:03:51.247 "reset": true, 00:03:51.247 "nvme_admin": false, 00:03:51.247 "nvme_io": false, 00:03:51.247 "nvme_io_md": false, 00:03:51.247 "write_zeroes": true, 00:03:51.247 "zcopy": true, 00:03:51.247 "get_zone_info": false, 00:03:51.247 "zone_management": false, 00:03:51.247 "zone_append": false, 00:03:51.247 "compare": false, 00:03:51.247 "compare_and_write": false, 00:03:51.247 "abort": true, 00:03:51.247 "seek_hole": false, 00:03:51.247 "seek_data": false, 00:03:51.247 "copy": true, 00:03:51.247 "nvme_iov_md": false 00:03:51.247 }, 00:03:51.247 "memory_domains": [ 00:03:51.247 { 00:03:51.247 "dma_device_id": "system", 00:03:51.247 "dma_device_type": 1 00:03:51.247 }, 00:03:51.247 { 00:03:51.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:51.247 "dma_device_type": 2 00:03:51.247 } 00:03:51.247 ], 00:03:51.247 "driver_specific": {} 00:03:51.247 } 00:03:51.247 ]' 00:03:51.247 07:05:54 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:51.247 07:05:54 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:51.247 07:05:54 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:51.247 07:05:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.247 07:05:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.247 [2024-11-20 07:05:54.595900] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:51.247 [2024-11-20 07:05:54.595957] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:51.247 [2024-11-20 07:05:54.595981] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1721d20 00:03:51.247 [2024-11-20 07:05:54.595994] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:51.247 [2024-11-20 07:05:54.597346] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:51.247 [2024-11-20 07:05:54.597372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:51.247 Passthru0 00:03:51.247 07:05:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.247 07:05:54 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:51.247 07:05:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.247 07:05:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.247 07:05:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.247 07:05:54 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:51.247 { 00:03:51.247 "name": "Malloc0", 00:03:51.247 "aliases": [ 00:03:51.247 "a92660c0-2482-43a1-a508-d63b4a92bf24" 00:03:51.247 ], 00:03:51.247 "product_name": "Malloc disk", 00:03:51.247 "block_size": 512, 00:03:51.247 "num_blocks": 16384, 00:03:51.247 "uuid": "a92660c0-2482-43a1-a508-d63b4a92bf24", 00:03:51.247 "assigned_rate_limits": { 00:03:51.247 "rw_ios_per_sec": 0, 00:03:51.247 "rw_mbytes_per_sec": 0, 00:03:51.247 "r_mbytes_per_sec": 0, 00:03:51.247 "w_mbytes_per_sec": 0 00:03:51.247 }, 00:03:51.247 "claimed": true, 00:03:51.247 "claim_type": "exclusive_write", 00:03:51.247 "zoned": false, 00:03:51.247 "supported_io_types": { 00:03:51.247 "read": true, 00:03:51.247 "write": true, 00:03:51.247 "unmap": true, 00:03:51.247 "flush": 
true, 00:03:51.247 "reset": true, 00:03:51.247 "nvme_admin": false, 00:03:51.247 "nvme_io": false, 00:03:51.247 "nvme_io_md": false, 00:03:51.247 "write_zeroes": true, 00:03:51.247 "zcopy": true, 00:03:51.247 "get_zone_info": false, 00:03:51.247 "zone_management": false, 00:03:51.247 "zone_append": false, 00:03:51.247 "compare": false, 00:03:51.247 "compare_and_write": false, 00:03:51.247 "abort": true, 00:03:51.247 "seek_hole": false, 00:03:51.247 "seek_data": false, 00:03:51.247 "copy": true, 00:03:51.247 "nvme_iov_md": false 00:03:51.247 }, 00:03:51.247 "memory_domains": [ 00:03:51.247 { 00:03:51.247 "dma_device_id": "system", 00:03:51.247 "dma_device_type": 1 00:03:51.247 }, 00:03:51.247 { 00:03:51.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:51.247 "dma_device_type": 2 00:03:51.247 } 00:03:51.247 ], 00:03:51.247 "driver_specific": {} 00:03:51.247 }, 00:03:51.247 { 00:03:51.247 "name": "Passthru0", 00:03:51.247 "aliases": [ 00:03:51.247 "4e122ca3-b6cc-596e-aae7-0c7b5c95200d" 00:03:51.247 ], 00:03:51.247 "product_name": "passthru", 00:03:51.247 "block_size": 512, 00:03:51.247 "num_blocks": 16384, 00:03:51.247 "uuid": "4e122ca3-b6cc-596e-aae7-0c7b5c95200d", 00:03:51.247 "assigned_rate_limits": { 00:03:51.247 "rw_ios_per_sec": 0, 00:03:51.247 "rw_mbytes_per_sec": 0, 00:03:51.247 "r_mbytes_per_sec": 0, 00:03:51.247 "w_mbytes_per_sec": 0 00:03:51.247 }, 00:03:51.247 "claimed": false, 00:03:51.247 "zoned": false, 00:03:51.247 "supported_io_types": { 00:03:51.247 "read": true, 00:03:51.247 "write": true, 00:03:51.247 "unmap": true, 00:03:51.247 "flush": true, 00:03:51.247 "reset": true, 00:03:51.247 "nvme_admin": false, 00:03:51.247 "nvme_io": false, 00:03:51.247 "nvme_io_md": false, 00:03:51.247 "write_zeroes": true, 00:03:51.247 "zcopy": true, 00:03:51.247 "get_zone_info": false, 00:03:51.247 "zone_management": false, 00:03:51.247 "zone_append": false, 00:03:51.247 "compare": false, 00:03:51.247 "compare_and_write": false, 00:03:51.247 "abort": true, 00:03:51.247 "seek_hole": false, 00:03:51.247 "seek_data": false, 00:03:51.247 "copy": true, 00:03:51.247 "nvme_iov_md": false 00:03:51.247 }, 00:03:51.247 "memory_domains": [ 00:03:51.247 { 00:03:51.247 "dma_device_id": "system", 00:03:51.247 "dma_device_type": 1 00:03:51.247 }, 00:03:51.247 { 00:03:51.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:51.247 "dma_device_type": 2 00:03:51.247 } 00:03:51.247 ], 00:03:51.247 "driver_specific": { 00:03:51.247 "passthru": { 00:03:51.247 "name": "Passthru0", 00:03:51.247 "base_bdev_name": "Malloc0" 00:03:51.247 } 00:03:51.247 } 00:03:51.247 } 00:03:51.247 ]' 00:03:51.247 07:05:54 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:51.247 07:05:54 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:51.247 07:05:54 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:51.248 07:05:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.248 07:05:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.248 07:05:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.248 07:05:54 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:51.248 07:05:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.248 07:05:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.248 07:05:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.248 07:05:54 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:03:51.248 07:05:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.248 07:05:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.248 07:05:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.248 07:05:54 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:51.248 07:05:54 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:51.505 07:05:54 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:51.505 00:03:51.505 real 0m0.217s 00:03:51.505 user 0m0.140s 00:03:51.505 sys 0m0.022s 00:03:51.505 07:05:54 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:51.505 07:05:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.505 ************************************ 00:03:51.505 END TEST rpc_integrity 00:03:51.506 ************************************ 00:03:51.506 07:05:54 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:51.506 07:05:54 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:51.506 07:05:54 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:51.506 07:05:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:51.506 ************************************ 00:03:51.506 START TEST rpc_plugins 00:03:51.506 ************************************ 00:03:51.506 07:05:54 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:03:51.506 07:05:54 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:51.506 07:05:54 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.506 07:05:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:51.506 07:05:54 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.506 07:05:54 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:51.506 07:05:54 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:51.506 07:05:54 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.506 07:05:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:51.506 07:05:54 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.506 07:05:54 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:51.506 { 00:03:51.506 "name": "Malloc1", 00:03:51.506 "aliases": [ 00:03:51.506 "8f45d942-2c71-4800-8a11-59cdc31aae09" 00:03:51.506 ], 00:03:51.506 "product_name": "Malloc disk", 00:03:51.506 "block_size": 4096, 00:03:51.506 "num_blocks": 256, 00:03:51.506 "uuid": "8f45d942-2c71-4800-8a11-59cdc31aae09", 00:03:51.506 "assigned_rate_limits": { 00:03:51.506 "rw_ios_per_sec": 0, 00:03:51.506 "rw_mbytes_per_sec": 0, 00:03:51.506 "r_mbytes_per_sec": 0, 00:03:51.506 "w_mbytes_per_sec": 0 00:03:51.506 }, 00:03:51.506 "claimed": false, 00:03:51.506 "zoned": false, 00:03:51.506 "supported_io_types": { 00:03:51.506 "read": true, 00:03:51.506 "write": true, 00:03:51.506 "unmap": true, 00:03:51.506 "flush": true, 00:03:51.506 "reset": true, 00:03:51.506 "nvme_admin": false, 00:03:51.506 "nvme_io": false, 00:03:51.506 "nvme_io_md": false, 00:03:51.506 "write_zeroes": true, 00:03:51.506 "zcopy": true, 00:03:51.506 "get_zone_info": false, 00:03:51.506 "zone_management": false, 00:03:51.506 "zone_append": false, 00:03:51.506 "compare": false, 00:03:51.506 "compare_and_write": false, 00:03:51.506 "abort": true, 00:03:51.506 "seek_hole": false, 00:03:51.506 "seek_data": false, 00:03:51.506 "copy": true, 00:03:51.506 "nvme_iov_md": false 
00:03:51.506 }, 00:03:51.506 "memory_domains": [ 00:03:51.506 { 00:03:51.506 "dma_device_id": "system", 00:03:51.506 "dma_device_type": 1 00:03:51.506 }, 00:03:51.506 { 00:03:51.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:51.506 "dma_device_type": 2 00:03:51.506 } 00:03:51.506 ], 00:03:51.506 "driver_specific": {} 00:03:51.506 } 00:03:51.506 ]' 00:03:51.506 07:05:54 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:51.506 07:05:54 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:51.506 07:05:54 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:51.506 07:05:54 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.506 07:05:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:51.506 07:05:54 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.506 07:05:54 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:51.506 07:05:54 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.506 07:05:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:51.506 07:05:54 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.506 07:05:54 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:51.506 07:05:54 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:51.506 07:05:54 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:51.506 00:03:51.506 real 0m0.107s 00:03:51.506 user 0m0.064s 00:03:51.506 sys 0m0.013s 00:03:51.506 07:05:54 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:51.506 07:05:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:51.506 ************************************ 00:03:51.506 END TEST rpc_plugins 00:03:51.506 ************************************ 00:03:51.506 07:05:54 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:51.506 07:05:54 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:51.506 07:05:54 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:51.506 07:05:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:51.506 ************************************ 00:03:51.506 START TEST rpc_trace_cmd_test 00:03:51.506 ************************************ 00:03:51.506 07:05:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:03:51.506 07:05:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:51.506 07:05:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:51.506 07:05:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.506 07:05:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:51.506 07:05:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.506 07:05:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:51.506 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2373367", 00:03:51.506 "tpoint_group_mask": "0x8", 00:03:51.506 "iscsi_conn": { 00:03:51.506 "mask": "0x2", 00:03:51.506 "tpoint_mask": "0x0" 00:03:51.506 }, 00:03:51.506 "scsi": { 00:03:51.506 "mask": "0x4", 00:03:51.506 "tpoint_mask": "0x0" 00:03:51.506 }, 00:03:51.506 "bdev": { 00:03:51.506 "mask": "0x8", 00:03:51.506 "tpoint_mask": "0xffffffffffffffff" 00:03:51.506 }, 00:03:51.506 "nvmf_rdma": { 00:03:51.506 "mask": "0x10", 00:03:51.506 "tpoint_mask": "0x0" 00:03:51.506 }, 00:03:51.506 "nvmf_tcp": { 00:03:51.506 "mask": "0x20", 00:03:51.506 
"tpoint_mask": "0x0" 00:03:51.506 }, 00:03:51.506 "ftl": { 00:03:51.506 "mask": "0x40", 00:03:51.506 "tpoint_mask": "0x0" 00:03:51.506 }, 00:03:51.506 "blobfs": { 00:03:51.506 "mask": "0x80", 00:03:51.506 "tpoint_mask": "0x0" 00:03:51.506 }, 00:03:51.506 "dsa": { 00:03:51.506 "mask": "0x200", 00:03:51.506 "tpoint_mask": "0x0" 00:03:51.506 }, 00:03:51.506 "thread": { 00:03:51.506 "mask": "0x400", 00:03:51.506 "tpoint_mask": "0x0" 00:03:51.506 }, 00:03:51.506 "nvme_pcie": { 00:03:51.506 "mask": "0x800", 00:03:51.506 "tpoint_mask": "0x0" 00:03:51.506 }, 00:03:51.506 "iaa": { 00:03:51.506 "mask": "0x1000", 00:03:51.506 "tpoint_mask": "0x0" 00:03:51.506 }, 00:03:51.506 "nvme_tcp": { 00:03:51.506 "mask": "0x2000", 00:03:51.506 "tpoint_mask": "0x0" 00:03:51.506 }, 00:03:51.506 "bdev_nvme": { 00:03:51.506 "mask": "0x4000", 00:03:51.506 "tpoint_mask": "0x0" 00:03:51.506 }, 00:03:51.506 "sock": { 00:03:51.506 "mask": "0x8000", 00:03:51.506 "tpoint_mask": "0x0" 00:03:51.506 }, 00:03:51.506 "blob": { 00:03:51.506 "mask": "0x10000", 00:03:51.506 "tpoint_mask": "0x0" 00:03:51.506 }, 00:03:51.506 "bdev_raid": { 00:03:51.506 "mask": "0x20000", 00:03:51.506 "tpoint_mask": "0x0" 00:03:51.506 }, 00:03:51.506 "scheduler": { 00:03:51.506 "mask": "0x40000", 00:03:51.506 "tpoint_mask": "0x0" 00:03:51.506 } 00:03:51.506 }' 00:03:51.506 07:05:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:51.764 07:05:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:51.764 07:05:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:51.764 07:05:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:51.765 07:05:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:51.765 07:05:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:51.765 07:05:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:51.765 07:05:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:51.765 07:05:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:51.765 07:05:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:51.765 00:03:51.765 real 0m0.181s 00:03:51.765 user 0m0.160s 00:03:51.765 sys 0m0.013s 00:03:51.765 07:05:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:51.765 07:05:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:51.765 ************************************ 00:03:51.765 END TEST rpc_trace_cmd_test 00:03:51.765 ************************************ 00:03:51.765 07:05:55 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:51.765 07:05:55 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:51.765 07:05:55 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:51.765 07:05:55 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:51.765 07:05:55 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:51.765 07:05:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:51.765 ************************************ 00:03:51.765 START TEST rpc_daemon_integrity 00:03:51.765 ************************************ 00:03:51.765 07:05:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:03:51.765 07:05:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:51.765 07:05:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.765 07:05:55 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.765 07:05:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.765 07:05:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:51.765 07:05:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:51.765 07:05:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:51.765 07:05:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:51.765 07:05:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.765 07:05:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.765 07:05:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.765 07:05:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:51.765 07:05:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:51.765 07:05:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.765 07:05:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:52.023 07:05:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:52.023 07:05:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:52.023 { 00:03:52.023 "name": "Malloc2", 00:03:52.023 "aliases": [ 00:03:52.023 "0a15ecfb-d518-46af-929f-70899120b783" 00:03:52.023 ], 00:03:52.023 "product_name": "Malloc disk", 00:03:52.023 "block_size": 512, 00:03:52.023 "num_blocks": 16384, 00:03:52.023 "uuid": "0a15ecfb-d518-46af-929f-70899120b783", 00:03:52.023 "assigned_rate_limits": { 00:03:52.023 "rw_ios_per_sec": 0, 00:03:52.023 "rw_mbytes_per_sec": 0, 00:03:52.023 "r_mbytes_per_sec": 0, 00:03:52.023 "w_mbytes_per_sec": 0 00:03:52.023 }, 00:03:52.023 "claimed": false, 00:03:52.023 "zoned": false, 00:03:52.023 "supported_io_types": { 00:03:52.023 "read": true, 00:03:52.023 "write": true, 00:03:52.023 "unmap": true, 00:03:52.023 "flush": true, 00:03:52.023 "reset": true, 00:03:52.023 "nvme_admin": false, 00:03:52.023 "nvme_io": false, 00:03:52.023 "nvme_io_md": false, 00:03:52.023 "write_zeroes": true, 00:03:52.023 "zcopy": true, 00:03:52.023 "get_zone_info": false, 00:03:52.023 "zone_management": false, 00:03:52.023 "zone_append": false, 00:03:52.023 "compare": false, 00:03:52.023 "compare_and_write": false, 00:03:52.023 "abort": true, 00:03:52.023 "seek_hole": false, 00:03:52.023 "seek_data": false, 00:03:52.023 "copy": true, 00:03:52.023 "nvme_iov_md": false 00:03:52.023 }, 00:03:52.023 "memory_domains": [ 00:03:52.023 { 00:03:52.023 "dma_device_id": "system", 00:03:52.023 "dma_device_type": 1 00:03:52.023 }, 00:03:52.023 { 00:03:52.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:52.023 "dma_device_type": 2 00:03:52.023 } 00:03:52.023 ], 00:03:52.023 "driver_specific": {} 00:03:52.023 } 00:03:52.023 ]' 00:03:52.023 07:05:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:52.023 07:05:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:52.023 07:05:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:52.023 07:05:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:52.023 07:05:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:52.023 [2024-11-20 07:05:55.238157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:52.023 
[2024-11-20 07:05:55.238210] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:52.023 [2024-11-20 07:05:55.238233] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15def10 00:03:52.023 [2024-11-20 07:05:55.238245] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:52.023 [2024-11-20 07:05:55.239464] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:52.023 [2024-11-20 07:05:55.239495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:52.023 Passthru0 00:03:52.023 07:05:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:52.023 07:05:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:52.023 07:05:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:52.023 07:05:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:52.023 07:05:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:52.023 07:05:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:52.023 { 00:03:52.023 "name": "Malloc2", 00:03:52.024 "aliases": [ 00:03:52.024 "0a15ecfb-d518-46af-929f-70899120b783" 00:03:52.024 ], 00:03:52.024 "product_name": "Malloc disk", 00:03:52.024 "block_size": 512, 00:03:52.024 "num_blocks": 16384, 00:03:52.024 "uuid": "0a15ecfb-d518-46af-929f-70899120b783", 00:03:52.024 "assigned_rate_limits": { 00:03:52.024 "rw_ios_per_sec": 0, 00:03:52.024 "rw_mbytes_per_sec": 0, 00:03:52.024 "r_mbytes_per_sec": 0, 00:03:52.024 "w_mbytes_per_sec": 0 00:03:52.024 }, 00:03:52.024 "claimed": true, 00:03:52.024 "claim_type": "exclusive_write", 00:03:52.024 "zoned": false, 00:03:52.024 "supported_io_types": { 00:03:52.024 "read": true, 00:03:52.024 "write": true, 00:03:52.024 "unmap": true, 00:03:52.024 "flush": true, 00:03:52.024 "reset": true, 00:03:52.024 "nvme_admin": false, 00:03:52.024 "nvme_io": false, 00:03:52.024 "nvme_io_md": false, 00:03:52.024 "write_zeroes": true, 00:03:52.024 "zcopy": true, 00:03:52.024 "get_zone_info": false, 00:03:52.024 "zone_management": false, 00:03:52.024 "zone_append": false, 00:03:52.024 "compare": false, 00:03:52.024 "compare_and_write": false, 00:03:52.024 "abort": true, 00:03:52.024 "seek_hole": false, 00:03:52.024 "seek_data": false, 00:03:52.024 "copy": true, 00:03:52.024 "nvme_iov_md": false 00:03:52.024 }, 00:03:52.024 "memory_domains": [ 00:03:52.024 { 00:03:52.024 "dma_device_id": "system", 00:03:52.024 "dma_device_type": 1 00:03:52.024 }, 00:03:52.024 { 00:03:52.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:52.024 "dma_device_type": 2 00:03:52.024 } 00:03:52.024 ], 00:03:52.024 "driver_specific": {} 00:03:52.024 }, 00:03:52.024 { 00:03:52.024 "name": "Passthru0", 00:03:52.024 "aliases": [ 00:03:52.024 "787dbc17-ed16-5097-961d-d700fb2f9dc9" 00:03:52.024 ], 00:03:52.024 "product_name": "passthru", 00:03:52.024 "block_size": 512, 00:03:52.024 "num_blocks": 16384, 00:03:52.024 "uuid": "787dbc17-ed16-5097-961d-d700fb2f9dc9", 00:03:52.024 "assigned_rate_limits": { 00:03:52.024 "rw_ios_per_sec": 0, 00:03:52.024 "rw_mbytes_per_sec": 0, 00:03:52.024 "r_mbytes_per_sec": 0, 00:03:52.024 "w_mbytes_per_sec": 0 00:03:52.024 }, 00:03:52.024 "claimed": false, 00:03:52.024 "zoned": false, 00:03:52.024 "supported_io_types": { 00:03:52.024 "read": true, 00:03:52.024 "write": true, 00:03:52.024 "unmap": true, 00:03:52.024 "flush": true, 00:03:52.024 "reset": true, 
00:03:52.024 "nvme_admin": false, 00:03:52.024 "nvme_io": false, 00:03:52.024 "nvme_io_md": false, 00:03:52.024 "write_zeroes": true, 00:03:52.024 "zcopy": true, 00:03:52.024 "get_zone_info": false, 00:03:52.024 "zone_management": false, 00:03:52.024 "zone_append": false, 00:03:52.024 "compare": false, 00:03:52.024 "compare_and_write": false, 00:03:52.024 "abort": true, 00:03:52.024 "seek_hole": false, 00:03:52.024 "seek_data": false, 00:03:52.024 "copy": true, 00:03:52.024 "nvme_iov_md": false 00:03:52.024 }, 00:03:52.024 "memory_domains": [ 00:03:52.024 { 00:03:52.024 "dma_device_id": "system", 00:03:52.024 "dma_device_type": 1 00:03:52.024 }, 00:03:52.024 { 00:03:52.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:52.024 "dma_device_type": 2 00:03:52.024 } 00:03:52.024 ], 00:03:52.024 "driver_specific": { 00:03:52.024 "passthru": { 00:03:52.024 "name": "Passthru0", 00:03:52.024 "base_bdev_name": "Malloc2" 00:03:52.024 } 00:03:52.024 } 00:03:52.024 } 00:03:52.024 ]' 00:03:52.024 07:05:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:52.024 07:05:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:52.024 07:05:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:52.024 07:05:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:52.024 07:05:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:52.024 07:05:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:52.024 07:05:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:52.024 07:05:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:52.024 07:05:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:52.024 07:05:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:52.024 07:05:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:52.024 07:05:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:52.024 07:05:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:52.024 07:05:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:52.024 07:05:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:52.024 07:05:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:52.024 07:05:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:52.024 00:03:52.024 real 0m0.211s 00:03:52.024 user 0m0.137s 00:03:52.024 sys 0m0.020s 00:03:52.024 07:05:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:52.024 07:05:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:52.024 ************************************ 00:03:52.024 END TEST rpc_daemon_integrity 00:03:52.024 ************************************ 00:03:52.024 07:05:55 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:52.024 07:05:55 rpc -- rpc/rpc.sh@84 -- # killprocess 2373367 00:03:52.024 07:05:55 rpc -- common/autotest_common.sh@952 -- # '[' -z 2373367 ']' 00:03:52.024 07:05:55 rpc -- common/autotest_common.sh@956 -- # kill -0 2373367 00:03:52.024 07:05:55 rpc -- common/autotest_common.sh@957 -- # uname 00:03:52.024 07:05:55 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:52.024 07:05:55 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2373367 
00:03:52.024 07:05:55 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:52.024 07:05:55 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:52.024 07:05:55 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2373367' 00:03:52.024 killing process with pid 2373367 00:03:52.024 07:05:55 rpc -- common/autotest_common.sh@971 -- # kill 2373367 00:03:52.024 07:05:55 rpc -- common/autotest_common.sh@976 -- # wait 2373367 00:03:52.590 00:03:52.590 real 0m1.939s 00:03:52.590 user 0m2.396s 00:03:52.590 sys 0m0.603s 00:03:52.590 07:05:55 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:52.590 07:05:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:52.590 ************************************ 00:03:52.590 END TEST rpc 00:03:52.590 ************************************ 00:03:52.590 07:05:55 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:52.590 07:05:55 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:52.590 07:05:55 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:52.590 07:05:55 -- common/autotest_common.sh@10 -- # set +x 00:03:52.590 ************************************ 00:03:52.590 START TEST skip_rpc 00:03:52.590 ************************************ 00:03:52.590 07:05:55 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:52.590 * Looking for test storage... 00:03:52.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:52.590 07:05:55 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:52.590 07:05:55 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:52.590 07:05:55 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:52.590 07:05:56 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:52.590 07:05:56 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:52.590 07:05:56 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:52.590 07:05:56 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:52.590 07:05:56 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:52.590 07:05:56 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:52.590 07:05:56 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:52.590 07:05:56 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:52.590 07:05:56 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:52.590 07:05:56 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:52.590 07:05:56 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:52.590 07:05:56 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:52.590 07:05:56 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:52.590 07:05:56 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:52.590 07:05:56 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:52.590 07:05:56 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:52.590 07:05:56 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:52.590 07:05:56 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:52.590 07:05:56 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:52.590 07:05:56 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:52.590 07:05:56 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:52.590 07:05:56 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:52.590 07:05:56 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:52.590 07:05:56 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:52.590 07:05:56 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:52.590 07:05:56 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:52.590 07:05:56 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:52.590 07:05:56 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:52.590 07:05:56 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:52.590 07:05:56 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:52.590 07:05:56 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:52.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.590 --rc genhtml_branch_coverage=1 00:03:52.590 --rc genhtml_function_coverage=1 00:03:52.590 --rc genhtml_legend=1 00:03:52.590 --rc geninfo_all_blocks=1 00:03:52.590 --rc geninfo_unexecuted_blocks=1 00:03:52.590 00:03:52.590 ' 00:03:52.590 07:05:56 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:52.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.590 --rc genhtml_branch_coverage=1 00:03:52.590 --rc genhtml_function_coverage=1 00:03:52.590 --rc genhtml_legend=1 00:03:52.590 --rc geninfo_all_blocks=1 00:03:52.590 --rc geninfo_unexecuted_blocks=1 00:03:52.590 00:03:52.590 ' 00:03:52.590 07:05:56 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:52.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.590 --rc genhtml_branch_coverage=1 00:03:52.590 --rc genhtml_function_coverage=1 00:03:52.590 --rc genhtml_legend=1 00:03:52.590 --rc geninfo_all_blocks=1 00:03:52.590 --rc geninfo_unexecuted_blocks=1 00:03:52.590 00:03:52.590 ' 00:03:52.590 07:05:56 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:52.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.590 --rc genhtml_branch_coverage=1 00:03:52.590 --rc genhtml_function_coverage=1 00:03:52.590 --rc genhtml_legend=1 00:03:52.590 --rc geninfo_all_blocks=1 00:03:52.590 --rc geninfo_unexecuted_blocks=1 00:03:52.590 00:03:52.590 ' 00:03:52.590 07:05:56 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:52.590 07:05:56 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:52.590 07:05:56 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:52.590 07:05:56 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:52.590 07:05:56 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:52.590 07:05:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:52.848 ************************************ 00:03:52.848 START TEST skip_rpc 00:03:52.848 ************************************ 00:03:52.848 07:05:56 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:03:52.848 
07:05:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2373710 00:03:52.848 07:05:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:52.848 07:05:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:52.848 07:05:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:52.848 [2024-11-20 07:05:56.100549] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:03:52.848 [2024-11-20 07:05:56.100626] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2373710 ] 00:03:52.848 [2024-11-20 07:05:56.161131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:52.848 [2024-11-20 07:05:56.220858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.113 07:06:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:58.113 07:06:01 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:03:58.113 07:06:01 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:58.113 07:06:01 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:03:58.113 07:06:01 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:58.113 07:06:01 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:03:58.113 07:06:01 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:58.113 07:06:01 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:03:58.113 07:06:01 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:58.113 07:06:01 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.113 07:06:01 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:58.113 07:06:01 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:03:58.113 07:06:01 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:58.113 07:06:01 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:58.113 07:06:01 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:58.113 07:06:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:58.113 07:06:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2373710 00:03:58.113 07:06:01 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 2373710 ']' 00:03:58.113 07:06:01 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 2373710 00:03:58.113 07:06:01 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:03:58.113 07:06:01 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:58.113 07:06:01 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2373710 00:03:58.113 07:06:01 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:58.113 07:06:01 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:58.113 07:06:01 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2373710' 00:03:58.113 killing process with pid 2373710 00:03:58.113 07:06:01 
skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 2373710 00:03:58.113 07:06:01 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 2373710 00:03:58.113 00:03:58.113 real 0m5.454s 00:03:58.113 user 0m5.166s 00:03:58.113 sys 0m0.305s 00:03:58.113 07:06:01 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:58.113 07:06:01 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.113 ************************************ 00:03:58.113 END TEST skip_rpc 00:03:58.113 ************************************ 00:03:58.113 07:06:01 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:58.113 07:06:01 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:58.113 07:06:01 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:58.113 07:06:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.371 ************************************ 00:03:58.371 START TEST skip_rpc_with_json 00:03:58.371 ************************************ 00:03:58.371 07:06:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:03:58.371 07:06:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:58.371 07:06:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2374495 00:03:58.371 07:06:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:58.371 07:06:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:58.371 07:06:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2374495 00:03:58.371 07:06:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 2374495 ']' 00:03:58.372 07:06:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:58.372 07:06:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:58.372 07:06:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:58.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:58.372 07:06:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:58.372 07:06:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:58.372 [2024-11-20 07:06:01.607538] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:03:58.372 [2024-11-20 07:06:01.607643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2374495 ] 00:03:58.372 [2024-11-20 07:06:01.674095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:58.372 [2024-11-20 07:06:01.735160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.630 07:06:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:58.630 07:06:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:03:58.630 07:06:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:58.630 07:06:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:58.630 07:06:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:58.630 [2024-11-20 07:06:02.013182] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:58.630 request: 00:03:58.630 { 00:03:58.630 "trtype": "tcp", 00:03:58.630 "method": "nvmf_get_transports", 00:03:58.630 "req_id": 1 00:03:58.630 } 00:03:58.630 Got JSON-RPC error response 00:03:58.630 response: 00:03:58.630 { 00:03:58.630 "code": -19, 00:03:58.630 "message": "No such device" 00:03:58.630 } 00:03:58.630 07:06:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:58.630 07:06:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:58.630 07:06:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:58.630 07:06:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:58.630 [2024-11-20 07:06:02.021300] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:58.630 07:06:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:58.630 07:06:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:58.630 07:06:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:58.630 07:06:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:58.889 07:06:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:58.889 07:06:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:58.889 { 00:03:58.889 "subsystems": [ 00:03:58.889 { 00:03:58.889 "subsystem": "fsdev", 00:03:58.889 "config": [ 00:03:58.889 { 00:03:58.889 "method": "fsdev_set_opts", 00:03:58.889 "params": { 00:03:58.889 "fsdev_io_pool_size": 65535, 00:03:58.889 "fsdev_io_cache_size": 256 00:03:58.889 } 00:03:58.889 } 00:03:58.889 ] 00:03:58.889 }, 00:03:58.889 { 00:03:58.889 "subsystem": "vfio_user_target", 00:03:58.889 "config": null 00:03:58.889 }, 00:03:58.889 { 00:03:58.889 "subsystem": "keyring", 00:03:58.889 "config": [] 00:03:58.889 }, 00:03:58.889 { 00:03:58.889 "subsystem": "iobuf", 00:03:58.889 "config": [ 00:03:58.889 { 00:03:58.889 "method": "iobuf_set_options", 00:03:58.889 "params": { 00:03:58.889 "small_pool_count": 8192, 00:03:58.889 "large_pool_count": 1024, 00:03:58.889 "small_bufsize": 8192, 00:03:58.889 "large_bufsize": 135168, 00:03:58.889 "enable_numa": false 00:03:58.889 } 00:03:58.889 } 
00:03:58.889 ] 00:03:58.889 }, 00:03:58.889 { 00:03:58.889 "subsystem": "sock", 00:03:58.889 "config": [ 00:03:58.889 { 00:03:58.889 "method": "sock_set_default_impl", 00:03:58.889 "params": { 00:03:58.889 "impl_name": "posix" 00:03:58.889 } 00:03:58.889 }, 00:03:58.889 { 00:03:58.889 "method": "sock_impl_set_options", 00:03:58.889 "params": { 00:03:58.889 "impl_name": "ssl", 00:03:58.889 "recv_buf_size": 4096, 00:03:58.889 "send_buf_size": 4096, 00:03:58.889 "enable_recv_pipe": true, 00:03:58.889 "enable_quickack": false, 00:03:58.889 "enable_placement_id": 0, 00:03:58.889 "enable_zerocopy_send_server": true, 00:03:58.889 "enable_zerocopy_send_client": false, 00:03:58.889 "zerocopy_threshold": 0, 00:03:58.889 "tls_version": 0, 00:03:58.889 "enable_ktls": false 00:03:58.889 } 00:03:58.889 }, 00:03:58.889 { 00:03:58.889 "method": "sock_impl_set_options", 00:03:58.889 "params": { 00:03:58.889 "impl_name": "posix", 00:03:58.889 "recv_buf_size": 2097152, 00:03:58.889 "send_buf_size": 2097152, 00:03:58.889 "enable_recv_pipe": true, 00:03:58.889 "enable_quickack": false, 00:03:58.889 "enable_placement_id": 0, 00:03:58.889 "enable_zerocopy_send_server": true, 00:03:58.889 "enable_zerocopy_send_client": false, 00:03:58.889 "zerocopy_threshold": 0, 00:03:58.889 "tls_version": 0, 00:03:58.889 "enable_ktls": false 00:03:58.889 } 00:03:58.889 } 00:03:58.889 ] 00:03:58.889 }, 00:03:58.889 { 00:03:58.889 "subsystem": "vmd", 00:03:58.889 "config": [] 00:03:58.889 }, 00:03:58.889 { 00:03:58.889 "subsystem": "accel", 00:03:58.889 "config": [ 00:03:58.889 { 00:03:58.889 "method": "accel_set_options", 00:03:58.889 "params": { 00:03:58.889 "small_cache_size": 128, 00:03:58.889 "large_cache_size": 16, 00:03:58.889 "task_count": 2048, 00:03:58.889 "sequence_count": 2048, 00:03:58.889 "buf_count": 2048 00:03:58.889 } 00:03:58.889 } 00:03:58.889 ] 00:03:58.889 }, 00:03:58.889 { 00:03:58.889 "subsystem": "bdev", 00:03:58.889 "config": [ 00:03:58.889 { 00:03:58.889 "method": "bdev_set_options", 00:03:58.889 "params": { 00:03:58.889 "bdev_io_pool_size": 65535, 00:03:58.889 "bdev_io_cache_size": 256, 00:03:58.889 "bdev_auto_examine": true, 00:03:58.889 "iobuf_small_cache_size": 128, 00:03:58.889 "iobuf_large_cache_size": 16 00:03:58.889 } 00:03:58.889 }, 00:03:58.889 { 00:03:58.889 "method": "bdev_raid_set_options", 00:03:58.889 "params": { 00:03:58.889 "process_window_size_kb": 1024, 00:03:58.889 "process_max_bandwidth_mb_sec": 0 00:03:58.889 } 00:03:58.889 }, 00:03:58.889 { 00:03:58.889 "method": "bdev_iscsi_set_options", 00:03:58.889 "params": { 00:03:58.889 "timeout_sec": 30 00:03:58.889 } 00:03:58.889 }, 00:03:58.889 { 00:03:58.889 "method": "bdev_nvme_set_options", 00:03:58.889 "params": { 00:03:58.889 "action_on_timeout": "none", 00:03:58.889 "timeout_us": 0, 00:03:58.889 "timeout_admin_us": 0, 00:03:58.889 "keep_alive_timeout_ms": 10000, 00:03:58.889 "arbitration_burst": 0, 00:03:58.889 "low_priority_weight": 0, 00:03:58.889 "medium_priority_weight": 0, 00:03:58.889 "high_priority_weight": 0, 00:03:58.889 "nvme_adminq_poll_period_us": 10000, 00:03:58.889 "nvme_ioq_poll_period_us": 0, 00:03:58.889 "io_queue_requests": 0, 00:03:58.889 "delay_cmd_submit": true, 00:03:58.889 "transport_retry_count": 4, 00:03:58.889 "bdev_retry_count": 3, 00:03:58.889 "transport_ack_timeout": 0, 00:03:58.889 "ctrlr_loss_timeout_sec": 0, 00:03:58.889 "reconnect_delay_sec": 0, 00:03:58.889 "fast_io_fail_timeout_sec": 0, 00:03:58.889 "disable_auto_failback": false, 00:03:58.889 "generate_uuids": false, 00:03:58.889 "transport_tos": 
0, 00:03:58.889 "nvme_error_stat": false, 00:03:58.889 "rdma_srq_size": 0, 00:03:58.889 "io_path_stat": false, 00:03:58.889 "allow_accel_sequence": false, 00:03:58.889 "rdma_max_cq_size": 0, 00:03:58.889 "rdma_cm_event_timeout_ms": 0, 00:03:58.889 "dhchap_digests": [ 00:03:58.889 "sha256", 00:03:58.889 "sha384", 00:03:58.889 "sha512" 00:03:58.889 ], 00:03:58.889 "dhchap_dhgroups": [ 00:03:58.889 "null", 00:03:58.889 "ffdhe2048", 00:03:58.889 "ffdhe3072", 00:03:58.890 "ffdhe4096", 00:03:58.890 "ffdhe6144", 00:03:58.890 "ffdhe8192" 00:03:58.890 ] 00:03:58.890 } 00:03:58.890 }, 00:03:58.890 { 00:03:58.890 "method": "bdev_nvme_set_hotplug", 00:03:58.890 "params": { 00:03:58.890 "period_us": 100000, 00:03:58.890 "enable": false 00:03:58.890 } 00:03:58.890 }, 00:03:58.890 { 00:03:58.890 "method": "bdev_wait_for_examine" 00:03:58.890 } 00:03:58.890 ] 00:03:58.890 }, 00:03:58.890 { 00:03:58.890 "subsystem": "scsi", 00:03:58.890 "config": null 00:03:58.890 }, 00:03:58.890 { 00:03:58.890 "subsystem": "scheduler", 00:03:58.890 "config": [ 00:03:58.890 { 00:03:58.890 "method": "framework_set_scheduler", 00:03:58.890 "params": { 00:03:58.890 "name": "static" 00:03:58.890 } 00:03:58.890 } 00:03:58.890 ] 00:03:58.890 }, 00:03:58.890 { 00:03:58.890 "subsystem": "vhost_scsi", 00:03:58.890 "config": [] 00:03:58.890 }, 00:03:58.890 { 00:03:58.890 "subsystem": "vhost_blk", 00:03:58.890 "config": [] 00:03:58.890 }, 00:03:58.890 { 00:03:58.890 "subsystem": "ublk", 00:03:58.890 "config": [] 00:03:58.890 }, 00:03:58.890 { 00:03:58.890 "subsystem": "nbd", 00:03:58.890 "config": [] 00:03:58.890 }, 00:03:58.890 { 00:03:58.890 "subsystem": "nvmf", 00:03:58.890 "config": [ 00:03:58.890 { 00:03:58.890 "method": "nvmf_set_config", 00:03:58.890 "params": { 00:03:58.890 "discovery_filter": "match_any", 00:03:58.890 "admin_cmd_passthru": { 00:03:58.890 "identify_ctrlr": false 00:03:58.890 }, 00:03:58.890 "dhchap_digests": [ 00:03:58.890 "sha256", 00:03:58.890 "sha384", 00:03:58.890 "sha512" 00:03:58.890 ], 00:03:58.890 "dhchap_dhgroups": [ 00:03:58.890 "null", 00:03:58.890 "ffdhe2048", 00:03:58.890 "ffdhe3072", 00:03:58.890 "ffdhe4096", 00:03:58.890 "ffdhe6144", 00:03:58.890 "ffdhe8192" 00:03:58.890 ] 00:03:58.890 } 00:03:58.890 }, 00:03:58.890 { 00:03:58.890 "method": "nvmf_set_max_subsystems", 00:03:58.890 "params": { 00:03:58.890 "max_subsystems": 1024 00:03:58.890 } 00:03:58.890 }, 00:03:58.890 { 00:03:58.890 "method": "nvmf_set_crdt", 00:03:58.890 "params": { 00:03:58.890 "crdt1": 0, 00:03:58.890 "crdt2": 0, 00:03:58.890 "crdt3": 0 00:03:58.890 } 00:03:58.890 }, 00:03:58.890 { 00:03:58.890 "method": "nvmf_create_transport", 00:03:58.890 "params": { 00:03:58.890 "trtype": "TCP", 00:03:58.890 "max_queue_depth": 128, 00:03:58.890 "max_io_qpairs_per_ctrlr": 127, 00:03:58.890 "in_capsule_data_size": 4096, 00:03:58.890 "max_io_size": 131072, 00:03:58.890 "io_unit_size": 131072, 00:03:58.890 "max_aq_depth": 128, 00:03:58.890 "num_shared_buffers": 511, 00:03:58.890 "buf_cache_size": 4294967295, 00:03:58.890 "dif_insert_or_strip": false, 00:03:58.890 "zcopy": false, 00:03:58.890 "c2h_success": true, 00:03:58.890 "sock_priority": 0, 00:03:58.890 "abort_timeout_sec": 1, 00:03:58.890 "ack_timeout": 0, 00:03:58.890 "data_wr_pool_size": 0 00:03:58.890 } 00:03:58.890 } 00:03:58.890 ] 00:03:58.890 }, 00:03:58.890 { 00:03:58.890 "subsystem": "iscsi", 00:03:58.890 "config": [ 00:03:58.890 { 00:03:58.890 "method": "iscsi_set_options", 00:03:58.890 "params": { 00:03:58.890 "node_base": "iqn.2016-06.io.spdk", 00:03:58.890 "max_sessions": 
128, 00:03:58.890 "max_connections_per_session": 2, 00:03:58.890 "max_queue_depth": 64, 00:03:58.890 "default_time2wait": 2, 00:03:58.890 "default_time2retain": 20, 00:03:58.890 "first_burst_length": 8192, 00:03:58.890 "immediate_data": true, 00:03:58.890 "allow_duplicated_isid": false, 00:03:58.890 "error_recovery_level": 0, 00:03:58.890 "nop_timeout": 60, 00:03:58.890 "nop_in_interval": 30, 00:03:58.890 "disable_chap": false, 00:03:58.890 "require_chap": false, 00:03:58.890 "mutual_chap": false, 00:03:58.890 "chap_group": 0, 00:03:58.890 "max_large_datain_per_connection": 64, 00:03:58.890 "max_r2t_per_connection": 4, 00:03:58.890 "pdu_pool_size": 36864, 00:03:58.890 "immediate_data_pool_size": 16384, 00:03:58.890 "data_out_pool_size": 2048 00:03:58.890 } 00:03:58.890 } 00:03:58.890 ] 00:03:58.890 } 00:03:58.890 ] 00:03:58.890 } 00:03:58.890 07:06:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:58.890 07:06:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2374495 00:03:58.890 07:06:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 2374495 ']' 00:03:58.890 07:06:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 2374495 00:03:58.890 07:06:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:03:58.890 07:06:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:58.890 07:06:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2374495 00:03:58.890 07:06:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:58.890 07:06:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:58.890 07:06:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2374495' 00:03:58.890 killing process with pid 2374495 00:03:58.890 07:06:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 2374495 00:03:58.890 07:06:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 2374495 00:03:59.456 07:06:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2374656 00:03:59.457 07:06:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:59.457 07:06:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:04.736 07:06:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2374656 00:04:04.736 07:06:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 2374656 ']' 00:04:04.736 07:06:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 2374656 00:04:04.736 07:06:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:04.736 07:06:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:04.736 07:06:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2374656 00:04:04.736 07:06:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:04.736 07:06:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:04.736 07:06:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- 
# echo 'killing process with pid 2374656' 00:04:04.736 killing process with pid 2374656 00:04:04.736 07:06:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 2374656 00:04:04.736 07:06:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 2374656 00:04:04.736 07:06:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:04.736 07:06:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:04.736 00:04:04.736 real 0m6.564s 00:04:04.736 user 0m6.194s 00:04:04.736 sys 0m0.690s 00:04:04.736 07:06:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:04.736 07:06:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:04.736 ************************************ 00:04:04.736 END TEST skip_rpc_with_json 00:04:04.736 ************************************ 00:04:04.736 07:06:08 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:04.736 07:06:08 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:04.736 07:06:08 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:04.736 07:06:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.736 ************************************ 00:04:04.736 START TEST skip_rpc_with_delay 00:04:04.736 ************************************ 00:04:04.736 07:06:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:04:04.736 07:06:08 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:04.736 07:06:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:04.736 07:06:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:04.736 07:06:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:04.736 07:06:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:04.736 07:06:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:04.736 07:06:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:04.736 07:06:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:04.995 07:06:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:04.995 07:06:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:04.995 07:06:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:04.995 07:06:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:04.995 
[2024-11-20 07:06:08.222336] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:04.995 07:06:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:04.995 07:06:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:04.995 07:06:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:04.995 07:06:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:04.995 00:04:04.995 real 0m0.075s 00:04:04.995 user 0m0.047s 00:04:04.995 sys 0m0.028s 00:04:04.995 07:06:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:04.995 07:06:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:04.995 ************************************ 00:04:04.995 END TEST skip_rpc_with_delay 00:04:04.995 ************************************ 00:04:04.995 07:06:08 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:04.995 07:06:08 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:04.995 07:06:08 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:04.995 07:06:08 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:04.995 07:06:08 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:04.995 07:06:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.995 ************************************ 00:04:04.995 START TEST exit_on_failed_rpc_init 00:04:04.995 ************************************ 00:04:04.995 07:06:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:04:04.995 07:06:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2375731 00:04:04.995 07:06:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:04.995 07:06:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2375731 00:04:04.995 07:06:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 2375731 ']' 00:04:04.995 07:06:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:04.995 07:06:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:04.995 07:06:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:04.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:04.995 07:06:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:04.995 07:06:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:04.995 [2024-11-20 07:06:08.349421] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:04:04.995 [2024-11-20 07:06:08.349518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2375731 ] 00:04:04.995 [2024-11-20 07:06:08.414870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.253 [2024-11-20 07:06:08.477052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.512 07:06:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:05.512 07:06:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:04:05.512 07:06:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:05.512 07:06:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:05.512 07:06:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:05.512 07:06:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:05.512 07:06:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:05.512 07:06:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:05.512 07:06:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:05.512 07:06:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:05.512 07:06:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:05.512 07:06:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:05.512 07:06:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:05.512 07:06:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:05.512 07:06:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:05.512 [2024-11-20 07:06:08.813772] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:04:05.512 [2024-11-20 07:06:08.813860] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2375910 ] 00:04:05.512 [2024-11-20 07:06:08.880441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.512 [2024-11-20 07:06:08.941362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:05.512 [2024-11-20 07:06:08.941469] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:05.512 [2024-11-20 07:06:08.941489] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:05.512 [2024-11-20 07:06:08.941500] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:05.769 07:06:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:05.769 07:06:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:05.770 07:06:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:05.770 07:06:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:05.770 07:06:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:05.770 07:06:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:05.770 07:06:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:05.770 07:06:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2375731 00:04:05.770 07:06:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 2375731 ']' 00:04:05.770 07:06:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 2375731 00:04:05.770 07:06:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:04:05.770 07:06:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:05.770 07:06:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2375731 00:04:05.770 07:06:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:05.770 07:06:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:05.770 07:06:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2375731' 00:04:05.770 killing process with pid 2375731 00:04:05.770 07:06:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 2375731 00:04:05.770 07:06:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 2375731 00:04:06.027 00:04:06.027 real 0m1.156s 00:04:06.027 user 0m1.271s 00:04:06.027 sys 0m0.428s 00:04:06.027 07:06:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:06.027 07:06:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:06.027 ************************************ 00:04:06.027 END TEST exit_on_failed_rpc_init 00:04:06.027 ************************************ 00:04:06.286 07:06:09 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:06.286 00:04:06.286 real 0m13.604s 00:04:06.286 user 0m12.852s 00:04:06.286 sys 0m1.651s 00:04:06.286 07:06:09 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:06.286 07:06:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.286 ************************************ 00:04:06.286 END TEST skip_rpc 00:04:06.286 ************************************ 00:04:06.286 07:06:09 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:06.286 07:06:09 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:06.286 07:06:09 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:06.286 07:06:09 -- 
common/autotest_common.sh@10 -- # set +x 00:04:06.286 ************************************ 00:04:06.286 START TEST rpc_client 00:04:06.286 ************************************ 00:04:06.286 07:06:09 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:06.286 * Looking for test storage... 00:04:06.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:06.286 07:06:09 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:06.286 07:06:09 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:04:06.286 07:06:09 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:06.286 07:06:09 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:06.286 07:06:09 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:06.286 07:06:09 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:06.286 07:06:09 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:06.286 07:06:09 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:06.286 07:06:09 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:06.286 07:06:09 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:06.286 07:06:09 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:06.286 07:06:09 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:06.286 07:06:09 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:06.286 07:06:09 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:06.286 07:06:09 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:06.286 07:06:09 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:06.286 07:06:09 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:06.286 07:06:09 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:06.286 07:06:09 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:06.286 07:06:09 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:06.286 07:06:09 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:06.286 07:06:09 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:06.286 07:06:09 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:06.286 07:06:09 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:06.286 07:06:09 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:06.286 07:06:09 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:06.286 07:06:09 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:06.286 07:06:09 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:06.286 07:06:09 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:06.286 07:06:09 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:06.286 07:06:09 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:06.286 07:06:09 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:06.286 07:06:09 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:06.286 07:06:09 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:06.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.286 --rc genhtml_branch_coverage=1 00:04:06.286 --rc genhtml_function_coverage=1 00:04:06.286 --rc genhtml_legend=1 00:04:06.286 --rc geninfo_all_blocks=1 00:04:06.286 --rc geninfo_unexecuted_blocks=1 00:04:06.286 00:04:06.286 ' 00:04:06.286 07:06:09 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:06.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.286 --rc genhtml_branch_coverage=1 00:04:06.286 --rc genhtml_function_coverage=1 00:04:06.286 --rc genhtml_legend=1 00:04:06.286 --rc geninfo_all_blocks=1 00:04:06.286 --rc geninfo_unexecuted_blocks=1 00:04:06.286 00:04:06.286 ' 00:04:06.286 07:06:09 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:06.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.286 --rc genhtml_branch_coverage=1 00:04:06.286 --rc genhtml_function_coverage=1 00:04:06.286 --rc genhtml_legend=1 00:04:06.286 --rc geninfo_all_blocks=1 00:04:06.286 --rc geninfo_unexecuted_blocks=1 00:04:06.286 00:04:06.286 ' 00:04:06.286 07:06:09 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:06.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.286 --rc genhtml_branch_coverage=1 00:04:06.286 --rc genhtml_function_coverage=1 00:04:06.286 --rc genhtml_legend=1 00:04:06.286 --rc geninfo_all_blocks=1 00:04:06.286 --rc geninfo_unexecuted_blocks=1 00:04:06.286 00:04:06.286 ' 00:04:06.286 07:06:09 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:06.286 OK 00:04:06.286 07:06:09 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:06.287 00:04:06.287 real 0m0.156s 00:04:06.287 user 0m0.112s 00:04:06.287 sys 0m0.052s 00:04:06.287 07:06:09 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:06.287 07:06:09 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:06.287 ************************************ 00:04:06.287 END TEST rpc_client 00:04:06.287 ************************************ 00:04:06.287 07:06:09 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:04:06.287 07:06:09 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:06.287 07:06:09 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:06.287 07:06:09 -- common/autotest_common.sh@10 -- # set +x 00:04:06.545 ************************************ 00:04:06.545 START TEST json_config 00:04:06.545 ************************************ 00:04:06.545 07:06:09 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:06.545 07:06:09 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:06.545 07:06:09 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:04:06.545 07:06:09 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:06.545 07:06:09 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:06.545 07:06:09 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:06.545 07:06:09 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:06.545 07:06:09 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:06.545 07:06:09 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:06.545 07:06:09 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:06.545 07:06:09 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:06.545 07:06:09 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:06.545 07:06:09 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:06.545 07:06:09 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:06.545 07:06:09 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:06.545 07:06:09 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:06.545 07:06:09 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:06.545 07:06:09 json_config -- scripts/common.sh@345 -- # : 1 00:04:06.545 07:06:09 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:06.545 07:06:09 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:06.545 07:06:09 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:06.545 07:06:09 json_config -- scripts/common.sh@353 -- # local d=1 00:04:06.545 07:06:09 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:06.545 07:06:09 json_config -- scripts/common.sh@355 -- # echo 1 00:04:06.545 07:06:09 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:06.545 07:06:09 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:06.545 07:06:09 json_config -- scripts/common.sh@353 -- # local d=2 00:04:06.545 07:06:09 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:06.545 07:06:09 json_config -- scripts/common.sh@355 -- # echo 2 00:04:06.545 07:06:09 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:06.545 07:06:09 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:06.545 07:06:09 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:06.545 07:06:09 json_config -- scripts/common.sh@368 -- # return 0 00:04:06.545 07:06:09 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:06.545 07:06:09 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:06.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.545 --rc genhtml_branch_coverage=1 00:04:06.545 --rc genhtml_function_coverage=1 00:04:06.545 --rc genhtml_legend=1 00:04:06.545 --rc geninfo_all_blocks=1 00:04:06.545 --rc geninfo_unexecuted_blocks=1 00:04:06.545 00:04:06.545 ' 00:04:06.546 07:06:09 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:06.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.546 --rc genhtml_branch_coverage=1 00:04:06.546 --rc genhtml_function_coverage=1 00:04:06.546 --rc genhtml_legend=1 00:04:06.546 --rc geninfo_all_blocks=1 00:04:06.546 --rc geninfo_unexecuted_blocks=1 00:04:06.546 00:04:06.546 ' 00:04:06.546 07:06:09 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:06.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.546 --rc genhtml_branch_coverage=1 00:04:06.546 --rc genhtml_function_coverage=1 00:04:06.546 --rc genhtml_legend=1 00:04:06.546 --rc geninfo_all_blocks=1 00:04:06.546 --rc geninfo_unexecuted_blocks=1 00:04:06.546 00:04:06.546 ' 00:04:06.546 07:06:09 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:06.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.546 --rc genhtml_branch_coverage=1 00:04:06.546 --rc genhtml_function_coverage=1 00:04:06.546 --rc genhtml_legend=1 00:04:06.546 --rc geninfo_all_blocks=1 00:04:06.546 --rc geninfo_unexecuted_blocks=1 00:04:06.546 00:04:06.546 ' 00:04:06.546 07:06:09 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:06.546 07:06:09 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:06.546 07:06:09 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:06.546 07:06:09 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:06.546 07:06:09 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:06.546 07:06:09 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:06.546 07:06:09 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:06.546 07:06:09 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:06.546 07:06:09 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:06.546 07:06:09 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:06.546 07:06:09 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:06.546 07:06:09 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:06.546 07:06:09 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:04:06.546 07:06:09 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:04:06.546 07:06:09 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:06.546 07:06:09 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:06.546 07:06:09 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:06.546 07:06:09 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:06.546 07:06:09 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:06.546 07:06:09 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:06.546 07:06:09 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:06.546 07:06:09 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:06.546 07:06:09 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:06.546 07:06:09 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.546 07:06:09 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.546 07:06:09 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.546 07:06:09 json_config -- paths/export.sh@5 -- # export PATH 00:04:06.546 07:06:09 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.546 07:06:09 json_config -- nvmf/common.sh@51 -- # : 0 00:04:06.546 07:06:09 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:06.546 07:06:09 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
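The nvmf/common.sh trace above shows the connection defaults the suite exports before any target exists (NVMF_PORT=4420, NVMF_TCP_IP_ADDRESS=127.0.0.1, a host NQN generated with nvme gen-hostnqn, NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn). As a minimal sketch of how an initiator would consume those defaults, assuming nvme-cli is available and a listener is already up -- this particular run drives everything over JSON-RPC and never issues the connect itself:

  # Sketch only, not executed in this log: initiator-side connect built from the
  # variables test/nvmf/common.sh exported above.
  nvme connect -t tcp -a "$NVMF_TCP_IP_ADDRESS" -s "$NVMF_PORT" -n "$NVME_SUBNQN" "${NVME_HOST[@]}"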
00:04:06.546 07:06:09 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:06.546 07:06:09 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:06.546 07:06:09 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:06.546 07:06:09 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:06.546 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:06.546 07:06:09 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:06.546 07:06:09 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:06.546 07:06:09 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:06.546 07:06:09 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:06.546 07:06:09 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:06.546 07:06:09 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:06.546 07:06:09 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:06.546 07:06:09 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:06.546 07:06:09 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:06.546 07:06:09 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:06.546 07:06:09 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:06.546 07:06:09 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:06.546 07:06:09 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:06.546 07:06:09 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:06.546 07:06:09 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:06.546 07:06:09 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:06.546 07:06:09 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:06.546 07:06:09 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:06.546 07:06:09 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:06.546 INFO: JSON configuration test init 00:04:06.546 07:06:09 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:06.546 07:06:09 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:06.546 07:06:09 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:06.546 07:06:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.546 07:06:09 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:06.546 07:06:09 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:06.546 07:06:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.546 07:06:09 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:06.546 07:06:09 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:06.546 07:06:09 json_config -- json_config/common.sh@10 -- # shift 00:04:06.546 07:06:09 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:06.546 07:06:09 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:06.546 07:06:09 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:06.546 07:06:09 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:06.546 07:06:09 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:06.546 07:06:09 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2376265 00:04:06.546 07:06:09 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:06.546 07:06:09 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:06.546 Waiting for target to run... 00:04:06.546 07:06:09 json_config -- json_config/common.sh@25 -- # waitforlisten 2376265 /var/tmp/spdk_tgt.sock 00:04:06.546 07:06:09 json_config -- common/autotest_common.sh@833 -- # '[' -z 2376265 ']' 00:04:06.546 07:06:09 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:06.546 07:06:09 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:06.546 07:06:09 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:06.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:06.546 07:06:09 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:06.546 07:06:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.546 [2024-11-20 07:06:09.927208] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:04:06.546 [2024-11-20 07:06:09.927296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2376265 ] 00:04:07.115 [2024-11-20 07:06:10.464503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.115 [2024-11-20 07:06:10.516780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.681 07:06:10 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:07.681 07:06:10 json_config -- common/autotest_common.sh@866 -- # return 0 00:04:07.681 07:06:10 json_config -- json_config/common.sh@26 -- # echo '' 00:04:07.681 00:04:07.681 07:06:10 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:07.681 07:06:10 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:07.681 07:06:10 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:07.681 07:06:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:07.681 07:06:10 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:07.681 07:06:10 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:07.681 07:06:10 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:07.681 07:06:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:07.681 07:06:10 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:07.681 07:06:10 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:07.681 07:06:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:10.969 07:06:14 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:10.969 07:06:14 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:10.969 07:06:14 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:10.969 07:06:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.969 07:06:14 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:10.969 07:06:14 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:10.969 07:06:14 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:10.969 07:06:14 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:10.969 07:06:14 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:10.969 07:06:14 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:10.969 07:06:14 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:10.969 07:06:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:10.969 07:06:14 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:10.969 07:06:14 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:10.969 07:06:14 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:10.969 07:06:14 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:10.969 07:06:14 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:11.227 07:06:14 json_config -- json_config/json_config.sh@54 -- # sort 00:04:11.227 07:06:14 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:11.227 07:06:14 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:11.227 07:06:14 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:11.227 07:06:14 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:11.227 07:06:14 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:11.227 07:06:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.227 07:06:14 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:11.227 07:06:14 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:11.227 07:06:14 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:11.227 07:06:14 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:11.227 07:06:14 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:11.228 07:06:14 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:11.228 07:06:14 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:11.228 07:06:14 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:11.228 07:06:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.228 07:06:14 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:11.228 07:06:14 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:11.228 07:06:14 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:11.228 07:06:14 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:11.228 07:06:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:11.485 MallocForNvmf0 00:04:11.486 07:06:14 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:11.486 07:06:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:11.747 MallocForNvmf1 00:04:11.747 07:06:14 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:11.747 07:06:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:12.005 [2024-11-20 07:06:15.203228] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:12.005 07:06:15 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:12.005 07:06:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:12.262 07:06:15 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:12.262 07:06:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:12.519 07:06:15 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:12.519 07:06:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:12.777 07:06:16 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:12.777 07:06:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:13.036 [2024-11-20 07:06:16.262688] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:13.036 07:06:16 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:13.036 07:06:16 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:13.036 07:06:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.036 07:06:16 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:13.036 07:06:16 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:13.036 07:06:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.036 07:06:16 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:13.036 07:06:16 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:13.036 07:06:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:13.293 MallocBdevForConfigChangeCheck 00:04:13.293 07:06:16 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:13.294 07:06:16 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:13.294 07:06:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.294 07:06:16 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:13.294 07:06:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:13.860 07:06:17 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:13.860 INFO: shutting down applications... 
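Strung together, the tgt_rpc calls traced above are the entire target configuration that save_config then captures. A condensed sketch of the same sequence in plain shell, using the rpc.py subcommands and arguments exactly as they appear in this log; the trailing redirect to spdk_tgt_config.json mirrors the file the test later relaunches from and is an assumption about where the output lands:

  # Same RPC sequence the json_config test drove above, collapsed into plain shell.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk_tgt.sock
  $RPC -s $SOCK bdev_malloc_create 8 512 --name MallocForNvmf0
  $RPC -s $SOCK bdev_malloc_create 4 1024 --name MallocForNvmf1
  $RPC -s $SOCK nvmf_create_transport -t tcp -u 8192 -c 0
  $RPC -s $SOCK nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC -s $SOCK nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC -s $SOCK nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC -s $SOCK nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  $RPC -s $SOCK bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
  $RPC -s $SOCK save_config > spdk_tgt_config.json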
00:04:13.860 07:06:17 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:13.860 07:06:17 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:13.860 07:06:17 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:13.860 07:06:17 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:15.250 Calling clear_iscsi_subsystem 00:04:15.250 Calling clear_nvmf_subsystem 00:04:15.250 Calling clear_nbd_subsystem 00:04:15.250 Calling clear_ublk_subsystem 00:04:15.250 Calling clear_vhost_blk_subsystem 00:04:15.250 Calling clear_vhost_scsi_subsystem 00:04:15.250 Calling clear_bdev_subsystem 00:04:15.250 07:06:18 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:15.250 07:06:18 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:15.250 07:06:18 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:15.251 07:06:18 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:15.251 07:06:18 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:15.251 07:06:18 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:15.816 07:06:19 json_config -- json_config/json_config.sh@352 -- # break 00:04:15.816 07:06:19 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:15.816 07:06:19 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:15.816 07:06:19 json_config -- json_config/common.sh@31 -- # local app=target 00:04:15.816 07:06:19 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:15.816 07:06:19 json_config -- json_config/common.sh@35 -- # [[ -n 2376265 ]] 00:04:15.816 07:06:19 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2376265 00:04:15.816 07:06:19 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:15.816 07:06:19 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:15.816 07:06:19 json_config -- json_config/common.sh@41 -- # kill -0 2376265 00:04:15.816 07:06:19 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:16.381 07:06:19 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:16.381 07:06:19 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:16.381 07:06:19 json_config -- json_config/common.sh@41 -- # kill -0 2376265 00:04:16.381 07:06:19 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:16.381 07:06:19 json_config -- json_config/common.sh@43 -- # break 00:04:16.381 07:06:19 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:16.381 07:06:19 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:16.381 SPDK target shutdown done 00:04:16.381 07:06:19 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:16.381 INFO: relaunching applications... 
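[Stripped of the xtrace noise, the shutdown path logged above is clear_config.py followed by a SIGINT-and-poll loop. A minimal sketch of what json_config/common.sh does, as it appears in this trace; the 2>/dev/null on kill -0 is an assumption, and $pid was 2376265 in this run:]

    test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || break   # target gone, stop waiting
        sleep 0.5
    done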
00:04:16.381 07:06:19 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:16.381 07:06:19 json_config -- json_config/common.sh@9 -- # local app=target 00:04:16.381 07:06:19 json_config -- json_config/common.sh@10 -- # shift 00:04:16.381 07:06:19 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:16.381 07:06:19 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:16.381 07:06:19 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:16.381 07:06:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:16.381 07:06:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:16.381 07:06:19 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2377468 00:04:16.381 07:06:19 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:16.381 07:06:19 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:16.381 Waiting for target to run... 00:04:16.381 07:06:19 json_config -- json_config/common.sh@25 -- # waitforlisten 2377468 /var/tmp/spdk_tgt.sock 00:04:16.381 07:06:19 json_config -- common/autotest_common.sh@833 -- # '[' -z 2377468 ']' 00:04:16.381 07:06:19 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:16.381 07:06:19 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:16.381 07:06:19 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:16.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:16.381 07:06:19 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:16.381 07:06:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.381 [2024-11-20 07:06:19.633867] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:04:16.381 [2024-11-20 07:06:19.633951] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2377468 ] 00:04:16.950 [2024-11-20 07:06:20.183718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.950 [2024-11-20 07:06:20.237231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.240 [2024-11-20 07:06:23.292298] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:20.240 [2024-11-20 07:06:23.324747] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:20.240 07:06:23 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:20.240 07:06:23 json_config -- common/autotest_common.sh@866 -- # return 0 00:04:20.240 07:06:23 json_config -- json_config/common.sh@26 -- # echo '' 00:04:20.240 00:04:20.240 07:06:23 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:20.240 07:06:23 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:20.240 INFO: Checking if target configuration is the same... 
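[The relaunch logged above amounts to feeding the saved configuration straight back to a fresh target and waiting on its RPC socket. Condensed, and assuming the backgrounding/$! bookkeeping that json_config/common.sh performs around it:]

    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json spdk_tgt_config.json &
    app_pid=$!                              # 2377468 in this run
    waitforlisten "$app_pid" /var/tmp/spdk_tgt.sock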
00:04:20.240 07:06:23 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:20.240 07:06:23 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:20.240 07:06:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:20.240 + '[' 2 -ne 2 ']' 00:04:20.240 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:20.240 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:20.240 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:20.240 +++ basename /dev/fd/62 00:04:20.241 ++ mktemp /tmp/62.XXX 00:04:20.241 + tmp_file_1=/tmp/62.B1p 00:04:20.241 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:20.241 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:20.241 + tmp_file_2=/tmp/spdk_tgt_config.json.iao 00:04:20.241 + ret=0 00:04:20.241 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:20.498 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:20.498 + diff -u /tmp/62.B1p /tmp/spdk_tgt_config.json.iao 00:04:20.498 + echo 'INFO: JSON config files are the same' 00:04:20.498 INFO: JSON config files are the same 00:04:20.498 + rm /tmp/62.B1p /tmp/spdk_tgt_config.json.iao 00:04:20.498 + exit 0 00:04:20.498 07:06:23 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:20.498 07:06:23 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:20.498 INFO: changing configuration and checking if this can be detected... 00:04:20.498 07:06:23 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:20.498 07:06:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:20.756 07:06:24 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:20.756 07:06:24 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:20.756 07:06:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:20.756 + '[' 2 -ne 2 ']' 00:04:20.756 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:20.756 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
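[Each json_diff.sh run (one completed above, the second continuing below) reduces to sorting both configurations and diffing them. A condensed equivalent, assuming config_filter.py reads stdin and writes stdout; the real script plumbs the live side through /dev/fd/62 and mktemp names as shown in the trace:]

    rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method sort > /tmp/62.B1p
    test/json_config/config_filter.py -method sort \
        < spdk_tgt_config.json > /tmp/spdk_tgt_config.json.iao
    diff -u /tmp/62.B1p /tmp/spdk_tgt_config.json.iao \
        && echo 'INFO: JSON config files are the same'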
00:04:20.756 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:20.756 +++ basename /dev/fd/62 00:04:20.756 ++ mktemp /tmp/62.XXX 00:04:20.756 + tmp_file_1=/tmp/62.sRR 00:04:20.756 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:20.756 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:20.756 + tmp_file_2=/tmp/spdk_tgt_config.json.4Rg 00:04:20.756 + ret=0 00:04:20.756 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:21.322 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:21.322 + diff -u /tmp/62.sRR /tmp/spdk_tgt_config.json.4Rg 00:04:21.322 + ret=1 00:04:21.322 + echo '=== Start of file: /tmp/62.sRR ===' 00:04:21.322 + cat /tmp/62.sRR 00:04:21.322 + echo '=== End of file: /tmp/62.sRR ===' 00:04:21.322 + echo '' 00:04:21.322 + echo '=== Start of file: /tmp/spdk_tgt_config.json.4Rg ===' 00:04:21.322 + cat /tmp/spdk_tgt_config.json.4Rg 00:04:21.322 + echo '=== End of file: /tmp/spdk_tgt_config.json.4Rg ===' 00:04:21.322 + echo '' 00:04:21.322 + rm /tmp/62.sRR /tmp/spdk_tgt_config.json.4Rg 00:04:21.322 + exit 1 00:04:21.322 07:06:24 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:21.322 INFO: configuration change detected. 00:04:21.322 07:06:24 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:21.322 07:06:24 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:21.322 07:06:24 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:21.322 07:06:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.322 07:06:24 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:21.322 07:06:24 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:21.322 07:06:24 json_config -- json_config/json_config.sh@324 -- # [[ -n 2377468 ]] 00:04:21.322 07:06:24 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:21.322 07:06:24 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:21.322 07:06:24 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:21.322 07:06:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.322 07:06:24 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:21.322 07:06:24 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:21.322 07:06:24 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:21.322 07:06:24 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:21.322 07:06:24 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:21.322 07:06:24 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:21.322 07:06:24 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:21.322 07:06:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.322 07:06:24 json_config -- json_config/json_config.sh@330 -- # killprocess 2377468 00:04:21.322 07:06:24 json_config -- common/autotest_common.sh@952 -- # '[' -z 2377468 ']' 00:04:21.322 07:06:24 json_config -- common/autotest_common.sh@956 -- # kill -0 2377468 00:04:21.322 07:06:24 json_config -- common/autotest_common.sh@957 -- # uname 00:04:21.322 07:06:24 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:21.322 07:06:24 
json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2377468 00:04:21.322 07:06:24 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:21.322 07:06:24 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:21.322 07:06:24 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2377468' 00:04:21.322 killing process with pid 2377468 00:04:21.322 07:06:24 json_config -- common/autotest_common.sh@971 -- # kill 2377468 00:04:21.322 07:06:24 json_config -- common/autotest_common.sh@976 -- # wait 2377468 00:04:23.219 07:06:26 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:23.219 07:06:26 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:23.219 07:06:26 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:23.219 07:06:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.219 07:06:26 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:23.219 07:06:26 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:23.219 INFO: Success 00:04:23.219 00:04:23.219 real 0m16.536s 00:04:23.219 user 0m17.852s 00:04:23.219 sys 0m2.944s 00:04:23.219 07:06:26 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:23.219 07:06:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.219 ************************************ 00:04:23.219 END TEST json_config 00:04:23.219 ************************************ 00:04:23.219 07:06:26 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:23.219 07:06:26 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:23.219 07:06:26 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:23.219 07:06:26 -- common/autotest_common.sh@10 -- # set +x 00:04:23.219 ************************************ 00:04:23.219 START TEST json_config_extra_key 00:04:23.219 ************************************ 00:04:23.219 07:06:26 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:23.219 07:06:26 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:23.219 07:06:26 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:04:23.219 07:06:26 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:23.219 07:06:26 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:23.219 07:06:26 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.219 07:06:26 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.219 07:06:26 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.219 07:06:26 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.219 07:06:26 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.219 07:06:26 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.219 07:06:26 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.219 07:06:26 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.219 07:06:26 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:04:23.219 07:06:26 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.219 07:06:26 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.219 07:06:26 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:23.219 07:06:26 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:23.219 07:06:26 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.219 07:06:26 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:23.219 07:06:26 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:23.219 07:06:26 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:23.219 07:06:26 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.219 07:06:26 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:23.219 07:06:26 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.219 07:06:26 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:23.219 07:06:26 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:23.219 07:06:26 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.219 07:06:26 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:23.219 07:06:26 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.219 07:06:26 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.219 07:06:26 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.219 07:06:26 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:23.219 07:06:26 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.219 07:06:26 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:23.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.219 --rc genhtml_branch_coverage=1 00:04:23.219 --rc genhtml_function_coverage=1 00:04:23.219 --rc genhtml_legend=1 00:04:23.219 --rc geninfo_all_blocks=1 00:04:23.219 --rc geninfo_unexecuted_blocks=1 00:04:23.220 00:04:23.220 ' 00:04:23.220 07:06:26 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:23.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.220 --rc genhtml_branch_coverage=1 00:04:23.220 --rc genhtml_function_coverage=1 00:04:23.220 --rc genhtml_legend=1 00:04:23.220 --rc geninfo_all_blocks=1 00:04:23.220 --rc geninfo_unexecuted_blocks=1 00:04:23.220 00:04:23.220 ' 00:04:23.220 07:06:26 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:23.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.220 --rc genhtml_branch_coverage=1 00:04:23.220 --rc genhtml_function_coverage=1 00:04:23.220 --rc genhtml_legend=1 00:04:23.220 --rc geninfo_all_blocks=1 00:04:23.220 --rc geninfo_unexecuted_blocks=1 00:04:23.220 00:04:23.220 ' 00:04:23.220 07:06:26 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:23.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.220 --rc genhtml_branch_coverage=1 00:04:23.220 --rc genhtml_function_coverage=1 00:04:23.220 --rc genhtml_legend=1 00:04:23.220 --rc geninfo_all_blocks=1 00:04:23.220 --rc geninfo_unexecuted_blocks=1 00:04:23.220 00:04:23.220 ' 00:04:23.220 07:06:26 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:23.220 07:06:26 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:23.220 07:06:26 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:23.220 07:06:26 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:23.220 07:06:26 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:23.220 07:06:26 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:23.220 07:06:26 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:23.220 07:06:26 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:23.220 07:06:26 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:23.220 07:06:26 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:23.220 07:06:26 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:23.220 07:06:26 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:23.220 07:06:26 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:04:23.220 07:06:26 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:04:23.220 07:06:26 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:23.220 07:06:26 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:23.220 07:06:26 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:23.220 07:06:26 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:23.220 07:06:26 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:23.220 07:06:26 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:23.220 07:06:26 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:23.220 07:06:26 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:23.220 07:06:26 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:23.220 07:06:26 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.220 07:06:26 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.220 07:06:26 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.220 07:06:26 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:23.220 07:06:26 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.220 07:06:26 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:23.220 07:06:26 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:23.220 07:06:26 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:23.220 07:06:26 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:23.220 07:06:26 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:23.220 07:06:26 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:23.220 07:06:26 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:23.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:23.220 07:06:26 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:23.220 07:06:26 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:23.220 07:06:26 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:23.220 07:06:26 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:23.220 07:06:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:23.220 07:06:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:23.220 07:06:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:23.220 07:06:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:23.220 07:06:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:23.220 07:06:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:23.220 07:06:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:23.220 07:06:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:23.220 07:06:26 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:23.220 07:06:26 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:23.220 INFO: launching applications... 
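[One detail worth noting in the trace above: the "[: : integer expression expected" message from line 33 of test/nvmf/common.sh is bash reporting an arithmetic test against an empty string, which is an error rather than a false result. Reproduced in isolation; the guarded form and its variable name are only illustrative:]

    $ [ '' -eq 1 ] && echo yes
    bash: [: : integer expression expected
    $ [ "${SOME_FLAG:-0}" -eq 1 ] && echo yes    # defaulting the value avoids the error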
00:04:23.220 07:06:26 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:23.220 07:06:26 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:23.220 07:06:26 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:23.220 07:06:26 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:23.220 07:06:26 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:23.220 07:06:26 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:23.220 07:06:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:23.220 07:06:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:23.220 07:06:26 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2378390 00:04:23.220 07:06:26 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:23.220 07:06:26 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:23.220 Waiting for target to run... 00:04:23.220 07:06:26 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2378390 /var/tmp/spdk_tgt.sock 00:04:23.220 07:06:26 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 2378390 ']' 00:04:23.220 07:06:26 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:23.220 07:06:26 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:23.220 07:06:26 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:23.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:23.220 07:06:26 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:23.220 07:06:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:23.220 [2024-11-20 07:06:26.504140] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:04:23.220 [2024-11-20 07:06:26.504253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2378390 ] 00:04:23.786 [2024-11-20 07:06:27.040943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.786 [2024-11-20 07:06:27.089005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.351 07:06:27 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:24.351 07:06:27 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:04:24.351 07:06:27 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:24.351 00:04:24.351 07:06:27 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:24.351 INFO: shutting down applications... 
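[The per-app bookkeeping visible above (app_pid, app_socket, app_params, configs_path) is how json_config/common.sh parameterises the launch; json_config_test_start_app then composes the command line from those maps. A rough sketch under that assumption, with values taken from this run:]

    declare -A app_socket=( [target]=/var/tmp/spdk_tgt.sock )
    declare -A app_params=( [target]='-m 0x1 -s 1024' )
    declare -A configs_path=( [target]=test/json_config/extra_key.json )

    # app_params is expanded unquoted on purpose so its flags split into words
    build/bin/spdk_tgt ${app_params[target]} -r "${app_socket[target]}" \
        --json "${configs_path[target]}" &
    app_pid[target]=$!                      # 2378390 in this run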
00:04:24.351 07:06:27 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:24.351 07:06:27 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:24.351 07:06:27 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:24.351 07:06:27 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2378390 ]] 00:04:24.351 07:06:27 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2378390 00:04:24.351 07:06:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:24.351 07:06:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:24.351 07:06:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2378390 00:04:24.351 07:06:27 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:24.609 07:06:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:24.609 07:06:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:24.609 07:06:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2378390 00:04:24.609 07:06:27 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:24.609 07:06:27 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:24.609 07:06:27 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:24.609 07:06:27 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:24.609 SPDK target shutdown done 00:04:24.609 07:06:27 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:24.609 Success 00:04:24.609 00:04:24.609 real 0m1.688s 00:04:24.609 user 0m1.514s 00:04:24.609 sys 0m0.642s 00:04:24.609 07:06:27 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:24.609 07:06:27 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:24.609 ************************************ 00:04:24.609 END TEST json_config_extra_key 00:04:24.609 ************************************ 00:04:24.609 07:06:28 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:24.609 07:06:28 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:24.609 07:06:28 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:24.609 07:06:28 -- common/autotest_common.sh@10 -- # set +x 00:04:24.868 ************************************ 00:04:24.868 START TEST alias_rpc 00:04:24.868 ************************************ 00:04:24.868 07:06:28 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:24.868 * Looking for test storage... 
00:04:24.868 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:24.868 07:06:28 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:24.868 07:06:28 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:24.868 07:06:28 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:24.868 07:06:28 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:24.868 07:06:28 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:24.868 07:06:28 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:24.868 07:06:28 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:24.868 07:06:28 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.868 07:06:28 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:24.868 07:06:28 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:24.868 07:06:28 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:24.868 07:06:28 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:24.868 07:06:28 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:24.868 07:06:28 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:24.868 07:06:28 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:24.868 07:06:28 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:24.868 07:06:28 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:24.868 07:06:28 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:24.868 07:06:28 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:24.868 07:06:28 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:24.868 07:06:28 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:24.868 07:06:28 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.868 07:06:28 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:24.868 07:06:28 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:24.868 07:06:28 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:24.868 07:06:28 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:24.868 07:06:28 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.868 07:06:28 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:24.868 07:06:28 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:24.868 07:06:28 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:24.868 07:06:28 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:24.868 07:06:28 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:24.869 07:06:28 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.869 07:06:28 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:24.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.869 --rc genhtml_branch_coverage=1 00:04:24.869 --rc genhtml_function_coverage=1 00:04:24.869 --rc genhtml_legend=1 00:04:24.869 --rc geninfo_all_blocks=1 00:04:24.869 --rc geninfo_unexecuted_blocks=1 00:04:24.869 00:04:24.869 ' 00:04:24.869 07:06:28 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:24.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.869 --rc genhtml_branch_coverage=1 00:04:24.869 --rc genhtml_function_coverage=1 00:04:24.869 --rc genhtml_legend=1 00:04:24.869 --rc geninfo_all_blocks=1 00:04:24.869 --rc geninfo_unexecuted_blocks=1 00:04:24.869 00:04:24.869 ' 00:04:24.869 07:06:28 
alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:24.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.869 --rc genhtml_branch_coverage=1 00:04:24.869 --rc genhtml_function_coverage=1 00:04:24.869 --rc genhtml_legend=1 00:04:24.869 --rc geninfo_all_blocks=1 00:04:24.869 --rc geninfo_unexecuted_blocks=1 00:04:24.869 00:04:24.869 ' 00:04:24.869 07:06:28 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:24.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.869 --rc genhtml_branch_coverage=1 00:04:24.869 --rc genhtml_function_coverage=1 00:04:24.869 --rc genhtml_legend=1 00:04:24.869 --rc geninfo_all_blocks=1 00:04:24.869 --rc geninfo_unexecuted_blocks=1 00:04:24.869 00:04:24.869 ' 00:04:24.869 07:06:28 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:24.869 07:06:28 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2378706 00:04:24.869 07:06:28 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:24.869 07:06:28 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2378706 00:04:24.869 07:06:28 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 2378706 ']' 00:04:24.869 07:06:28 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.869 07:06:28 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:24.869 07:06:28 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.869 07:06:28 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:24.869 07:06:28 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.869 [2024-11-20 07:06:28.245853] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:04:24.869 [2024-11-20 07:06:28.245944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2378706 ] 00:04:25.126 [2024-11-20 07:06:28.311458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.126 [2024-11-20 07:06:28.368279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.384 07:06:28 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:25.384 07:06:28 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:25.384 07:06:28 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:25.641 07:06:28 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2378706 00:04:25.641 07:06:28 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 2378706 ']' 00:04:25.641 07:06:28 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 2378706 00:04:25.641 07:06:28 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:04:25.641 07:06:28 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:25.641 07:06:28 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2378706 00:04:25.641 07:06:28 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:25.641 07:06:28 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:25.641 07:06:28 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2378706' 00:04:25.641 killing process with pid 2378706 00:04:25.641 07:06:28 alias_rpc -- common/autotest_common.sh@971 -- # kill 2378706 00:04:25.641 07:06:28 alias_rpc -- common/autotest_common.sh@976 -- # wait 2378706 00:04:26.213 00:04:26.213 real 0m1.336s 00:04:26.213 user 0m1.458s 00:04:26.213 sys 0m0.420s 00:04:26.213 07:06:29 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:26.213 07:06:29 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.213 ************************************ 00:04:26.213 END TEST alias_rpc 00:04:26.213 ************************************ 00:04:26.213 07:06:29 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:26.213 07:06:29 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:26.213 07:06:29 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:26.213 07:06:29 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:26.213 07:06:29 -- common/autotest_common.sh@10 -- # set +x 00:04:26.213 ************************************ 00:04:26.213 START TEST spdkcli_tcp 00:04:26.213 ************************************ 00:04:26.213 07:06:29 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:26.213 * Looking for test storage... 
00:04:26.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:26.213 07:06:29 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:26.213 07:06:29 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:26.213 07:06:29 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:26.213 07:06:29 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:26.213 07:06:29 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:26.213 07:06:29 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:26.213 07:06:29 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:26.213 07:06:29 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.213 07:06:29 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:26.213 07:06:29 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:26.213 07:06:29 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:26.213 07:06:29 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:26.213 07:06:29 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:26.213 07:06:29 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:26.213 07:06:29 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:26.213 07:06:29 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:26.213 07:06:29 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:26.213 07:06:29 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:26.213 07:06:29 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:26.213 07:06:29 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:26.213 07:06:29 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:26.213 07:06:29 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.213 07:06:29 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:26.213 07:06:29 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.213 07:06:29 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:26.213 07:06:29 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:26.213 07:06:29 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.213 07:06:29 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:26.214 07:06:29 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.214 07:06:29 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.214 07:06:29 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.214 07:06:29 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:26.214 07:06:29 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.214 07:06:29 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:26.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.214 --rc genhtml_branch_coverage=1 00:04:26.214 --rc genhtml_function_coverage=1 00:04:26.214 --rc genhtml_legend=1 00:04:26.214 --rc geninfo_all_blocks=1 00:04:26.214 --rc geninfo_unexecuted_blocks=1 00:04:26.214 00:04:26.214 ' 00:04:26.214 07:06:29 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:26.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.214 --rc genhtml_branch_coverage=1 00:04:26.214 --rc genhtml_function_coverage=1 00:04:26.214 --rc genhtml_legend=1 00:04:26.214 --rc geninfo_all_blocks=1 00:04:26.214 --rc 
geninfo_unexecuted_blocks=1 00:04:26.214 00:04:26.214 ' 00:04:26.214 07:06:29 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:26.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.214 --rc genhtml_branch_coverage=1 00:04:26.214 --rc genhtml_function_coverage=1 00:04:26.214 --rc genhtml_legend=1 00:04:26.214 --rc geninfo_all_blocks=1 00:04:26.214 --rc geninfo_unexecuted_blocks=1 00:04:26.214 00:04:26.214 ' 00:04:26.214 07:06:29 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:26.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.214 --rc genhtml_branch_coverage=1 00:04:26.214 --rc genhtml_function_coverage=1 00:04:26.214 --rc genhtml_legend=1 00:04:26.214 --rc geninfo_all_blocks=1 00:04:26.214 --rc geninfo_unexecuted_blocks=1 00:04:26.214 00:04:26.214 ' 00:04:26.214 07:06:29 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:26.214 07:06:29 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:26.214 07:06:29 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:26.214 07:06:29 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:26.214 07:06:29 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:26.214 07:06:29 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:26.214 07:06:29 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:26.214 07:06:29 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:26.214 07:06:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:26.214 07:06:29 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2378901 00:04:26.214 07:06:29 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:26.214 07:06:29 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2378901 00:04:26.214 07:06:29 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 2378901 ']' 00:04:26.214 07:06:29 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.214 07:06:29 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:26.214 07:06:29 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.214 07:06:29 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:26.214 07:06:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:26.471 [2024-11-20 07:06:29.649568] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:04:26.471 [2024-11-20 07:06:29.649672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2378901 ] 00:04:26.471 [2024-11-20 07:06:29.731029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:26.471 [2024-11-20 07:06:29.809124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:26.471 [2024-11-20 07:06:29.809131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.728 07:06:30 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:26.728 07:06:30 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:04:26.729 07:06:30 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2379032 00:04:26.729 07:06:30 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:26.729 07:06:30 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:26.986 [ 00:04:26.986 "bdev_malloc_delete", 00:04:26.986 "bdev_malloc_create", 00:04:26.986 "bdev_null_resize", 00:04:26.986 "bdev_null_delete", 00:04:26.986 "bdev_null_create", 00:04:26.986 "bdev_nvme_cuse_unregister", 00:04:26.986 "bdev_nvme_cuse_register", 00:04:26.986 "bdev_opal_new_user", 00:04:26.986 "bdev_opal_set_lock_state", 00:04:26.986 "bdev_opal_delete", 00:04:26.986 "bdev_opal_get_info", 00:04:26.987 "bdev_opal_create", 00:04:26.987 "bdev_nvme_opal_revert", 00:04:26.987 "bdev_nvme_opal_init", 00:04:26.987 "bdev_nvme_send_cmd", 00:04:26.987 "bdev_nvme_set_keys", 00:04:26.987 "bdev_nvme_get_path_iostat", 00:04:26.987 "bdev_nvme_get_mdns_discovery_info", 00:04:26.987 "bdev_nvme_stop_mdns_discovery", 00:04:26.987 "bdev_nvme_start_mdns_discovery", 00:04:26.987 "bdev_nvme_set_multipath_policy", 00:04:26.987 "bdev_nvme_set_preferred_path", 00:04:26.987 "bdev_nvme_get_io_paths", 00:04:26.987 "bdev_nvme_remove_error_injection", 00:04:26.987 "bdev_nvme_add_error_injection", 00:04:26.987 "bdev_nvme_get_discovery_info", 00:04:26.987 "bdev_nvme_stop_discovery", 00:04:26.987 "bdev_nvme_start_discovery", 00:04:26.987 "bdev_nvme_get_controller_health_info", 00:04:26.987 "bdev_nvme_disable_controller", 00:04:26.987 "bdev_nvme_enable_controller", 00:04:26.987 "bdev_nvme_reset_controller", 00:04:26.987 "bdev_nvme_get_transport_statistics", 00:04:26.987 "bdev_nvme_apply_firmware", 00:04:26.987 "bdev_nvme_detach_controller", 00:04:26.987 "bdev_nvme_get_controllers", 00:04:26.987 "bdev_nvme_attach_controller", 00:04:26.987 "bdev_nvme_set_hotplug", 00:04:26.987 "bdev_nvme_set_options", 00:04:26.987 "bdev_passthru_delete", 00:04:26.987 "bdev_passthru_create", 00:04:26.987 "bdev_lvol_set_parent_bdev", 00:04:26.987 "bdev_lvol_set_parent", 00:04:26.987 "bdev_lvol_check_shallow_copy", 00:04:26.987 "bdev_lvol_start_shallow_copy", 00:04:26.987 "bdev_lvol_grow_lvstore", 00:04:26.987 "bdev_lvol_get_lvols", 00:04:26.987 "bdev_lvol_get_lvstores", 00:04:26.987 "bdev_lvol_delete", 00:04:26.987 "bdev_lvol_set_read_only", 00:04:26.987 "bdev_lvol_resize", 00:04:26.987 "bdev_lvol_decouple_parent", 00:04:26.987 "bdev_lvol_inflate", 00:04:26.987 "bdev_lvol_rename", 00:04:26.987 "bdev_lvol_clone_bdev", 00:04:26.987 "bdev_lvol_clone", 00:04:26.987 "bdev_lvol_snapshot", 00:04:26.987 "bdev_lvol_create", 00:04:26.987 "bdev_lvol_delete_lvstore", 00:04:26.987 "bdev_lvol_rename_lvstore", 
00:04:26.987 "bdev_lvol_create_lvstore", 00:04:26.987 "bdev_raid_set_options", 00:04:26.987 "bdev_raid_remove_base_bdev", 00:04:26.987 "bdev_raid_add_base_bdev", 00:04:26.987 "bdev_raid_delete", 00:04:26.987 "bdev_raid_create", 00:04:26.987 "bdev_raid_get_bdevs", 00:04:26.987 "bdev_error_inject_error", 00:04:26.987 "bdev_error_delete", 00:04:26.987 "bdev_error_create", 00:04:26.987 "bdev_split_delete", 00:04:26.987 "bdev_split_create", 00:04:26.987 "bdev_delay_delete", 00:04:26.987 "bdev_delay_create", 00:04:26.987 "bdev_delay_update_latency", 00:04:26.987 "bdev_zone_block_delete", 00:04:26.987 "bdev_zone_block_create", 00:04:26.987 "blobfs_create", 00:04:26.987 "blobfs_detect", 00:04:26.987 "blobfs_set_cache_size", 00:04:26.987 "bdev_aio_delete", 00:04:26.987 "bdev_aio_rescan", 00:04:26.987 "bdev_aio_create", 00:04:26.987 "bdev_ftl_set_property", 00:04:26.987 "bdev_ftl_get_properties", 00:04:26.987 "bdev_ftl_get_stats", 00:04:26.987 "bdev_ftl_unmap", 00:04:26.987 "bdev_ftl_unload", 00:04:26.987 "bdev_ftl_delete", 00:04:26.987 "bdev_ftl_load", 00:04:26.987 "bdev_ftl_create", 00:04:26.987 "bdev_virtio_attach_controller", 00:04:26.987 "bdev_virtio_scsi_get_devices", 00:04:26.987 "bdev_virtio_detach_controller", 00:04:26.987 "bdev_virtio_blk_set_hotplug", 00:04:26.987 "bdev_iscsi_delete", 00:04:26.987 "bdev_iscsi_create", 00:04:26.987 "bdev_iscsi_set_options", 00:04:26.987 "accel_error_inject_error", 00:04:26.987 "ioat_scan_accel_module", 00:04:26.987 "dsa_scan_accel_module", 00:04:26.987 "iaa_scan_accel_module", 00:04:26.987 "vfu_virtio_create_fs_endpoint", 00:04:26.987 "vfu_virtio_create_scsi_endpoint", 00:04:26.987 "vfu_virtio_scsi_remove_target", 00:04:26.987 "vfu_virtio_scsi_add_target", 00:04:26.987 "vfu_virtio_create_blk_endpoint", 00:04:26.987 "vfu_virtio_delete_endpoint", 00:04:26.987 "keyring_file_remove_key", 00:04:26.987 "keyring_file_add_key", 00:04:26.987 "keyring_linux_set_options", 00:04:26.987 "fsdev_aio_delete", 00:04:26.987 "fsdev_aio_create", 00:04:26.987 "iscsi_get_histogram", 00:04:26.987 "iscsi_enable_histogram", 00:04:26.987 "iscsi_set_options", 00:04:26.987 "iscsi_get_auth_groups", 00:04:26.987 "iscsi_auth_group_remove_secret", 00:04:26.987 "iscsi_auth_group_add_secret", 00:04:26.987 "iscsi_delete_auth_group", 00:04:26.987 "iscsi_create_auth_group", 00:04:26.987 "iscsi_set_discovery_auth", 00:04:26.987 "iscsi_get_options", 00:04:26.987 "iscsi_target_node_request_logout", 00:04:26.987 "iscsi_target_node_set_redirect", 00:04:26.987 "iscsi_target_node_set_auth", 00:04:26.987 "iscsi_target_node_add_lun", 00:04:26.987 "iscsi_get_stats", 00:04:26.987 "iscsi_get_connections", 00:04:26.987 "iscsi_portal_group_set_auth", 00:04:26.987 "iscsi_start_portal_group", 00:04:26.987 "iscsi_delete_portal_group", 00:04:26.987 "iscsi_create_portal_group", 00:04:26.987 "iscsi_get_portal_groups", 00:04:26.987 "iscsi_delete_target_node", 00:04:26.987 "iscsi_target_node_remove_pg_ig_maps", 00:04:26.987 "iscsi_target_node_add_pg_ig_maps", 00:04:26.987 "iscsi_create_target_node", 00:04:26.987 "iscsi_get_target_nodes", 00:04:26.987 "iscsi_delete_initiator_group", 00:04:26.987 "iscsi_initiator_group_remove_initiators", 00:04:26.987 "iscsi_initiator_group_add_initiators", 00:04:26.987 "iscsi_create_initiator_group", 00:04:26.987 "iscsi_get_initiator_groups", 00:04:26.987 "nvmf_set_crdt", 00:04:26.987 "nvmf_set_config", 00:04:26.987 "nvmf_set_max_subsystems", 00:04:26.987 "nvmf_stop_mdns_prr", 00:04:26.987 "nvmf_publish_mdns_prr", 00:04:26.987 "nvmf_subsystem_get_listeners", 00:04:26.987 
"nvmf_subsystem_get_qpairs", 00:04:26.987 "nvmf_subsystem_get_controllers", 00:04:26.987 "nvmf_get_stats", 00:04:26.987 "nvmf_get_transports", 00:04:26.987 "nvmf_create_transport", 00:04:26.987 "nvmf_get_targets", 00:04:26.987 "nvmf_delete_target", 00:04:26.987 "nvmf_create_target", 00:04:26.987 "nvmf_subsystem_allow_any_host", 00:04:26.987 "nvmf_subsystem_set_keys", 00:04:26.987 "nvmf_subsystem_remove_host", 00:04:26.987 "nvmf_subsystem_add_host", 00:04:26.987 "nvmf_ns_remove_host", 00:04:26.987 "nvmf_ns_add_host", 00:04:26.987 "nvmf_subsystem_remove_ns", 00:04:26.987 "nvmf_subsystem_set_ns_ana_group", 00:04:26.987 "nvmf_subsystem_add_ns", 00:04:26.987 "nvmf_subsystem_listener_set_ana_state", 00:04:26.987 "nvmf_discovery_get_referrals", 00:04:26.987 "nvmf_discovery_remove_referral", 00:04:26.987 "nvmf_discovery_add_referral", 00:04:26.988 "nvmf_subsystem_remove_listener", 00:04:26.988 "nvmf_subsystem_add_listener", 00:04:26.988 "nvmf_delete_subsystem", 00:04:26.988 "nvmf_create_subsystem", 00:04:26.988 "nvmf_get_subsystems", 00:04:26.988 "env_dpdk_get_mem_stats", 00:04:26.988 "nbd_get_disks", 00:04:26.988 "nbd_stop_disk", 00:04:26.988 "nbd_start_disk", 00:04:26.988 "ublk_recover_disk", 00:04:26.988 "ublk_get_disks", 00:04:26.988 "ublk_stop_disk", 00:04:26.988 "ublk_start_disk", 00:04:26.988 "ublk_destroy_target", 00:04:26.988 "ublk_create_target", 00:04:26.988 "virtio_blk_create_transport", 00:04:26.988 "virtio_blk_get_transports", 00:04:26.988 "vhost_controller_set_coalescing", 00:04:26.988 "vhost_get_controllers", 00:04:26.988 "vhost_delete_controller", 00:04:26.988 "vhost_create_blk_controller", 00:04:26.988 "vhost_scsi_controller_remove_target", 00:04:26.988 "vhost_scsi_controller_add_target", 00:04:26.988 "vhost_start_scsi_controller", 00:04:26.988 "vhost_create_scsi_controller", 00:04:26.988 "thread_set_cpumask", 00:04:26.988 "scheduler_set_options", 00:04:26.988 "framework_get_governor", 00:04:26.988 "framework_get_scheduler", 00:04:26.988 "framework_set_scheduler", 00:04:26.988 "framework_get_reactors", 00:04:26.988 "thread_get_io_channels", 00:04:26.988 "thread_get_pollers", 00:04:26.988 "thread_get_stats", 00:04:26.988 "framework_monitor_context_switch", 00:04:26.988 "spdk_kill_instance", 00:04:26.988 "log_enable_timestamps", 00:04:26.988 "log_get_flags", 00:04:26.988 "log_clear_flag", 00:04:26.988 "log_set_flag", 00:04:26.988 "log_get_level", 00:04:26.988 "log_set_level", 00:04:26.988 "log_get_print_level", 00:04:26.988 "log_set_print_level", 00:04:26.988 "framework_enable_cpumask_locks", 00:04:26.988 "framework_disable_cpumask_locks", 00:04:26.988 "framework_wait_init", 00:04:26.988 "framework_start_init", 00:04:26.988 "scsi_get_devices", 00:04:26.988 "bdev_get_histogram", 00:04:26.988 "bdev_enable_histogram", 00:04:26.988 "bdev_set_qos_limit", 00:04:26.988 "bdev_set_qd_sampling_period", 00:04:26.988 "bdev_get_bdevs", 00:04:26.988 "bdev_reset_iostat", 00:04:26.988 "bdev_get_iostat", 00:04:26.988 "bdev_examine", 00:04:26.988 "bdev_wait_for_examine", 00:04:26.988 "bdev_set_options", 00:04:26.988 "accel_get_stats", 00:04:26.988 "accel_set_options", 00:04:26.988 "accel_set_driver", 00:04:26.988 "accel_crypto_key_destroy", 00:04:26.988 "accel_crypto_keys_get", 00:04:26.988 "accel_crypto_key_create", 00:04:26.988 "accel_assign_opc", 00:04:26.988 "accel_get_module_info", 00:04:26.988 "accel_get_opc_assignments", 00:04:26.988 "vmd_rescan", 00:04:26.988 "vmd_remove_device", 00:04:26.988 "vmd_enable", 00:04:26.988 "sock_get_default_impl", 00:04:26.988 "sock_set_default_impl", 
00:04:26.988 "sock_impl_set_options", 00:04:26.988 "sock_impl_get_options", 00:04:26.988 "iobuf_get_stats", 00:04:26.988 "iobuf_set_options", 00:04:26.988 "keyring_get_keys", 00:04:26.988 "vfu_tgt_set_base_path", 00:04:26.988 "framework_get_pci_devices", 00:04:26.988 "framework_get_config", 00:04:26.988 "framework_get_subsystems", 00:04:26.988 "fsdev_set_opts", 00:04:26.988 "fsdev_get_opts", 00:04:26.988 "trace_get_info", 00:04:26.988 "trace_get_tpoint_group_mask", 00:04:26.988 "trace_disable_tpoint_group", 00:04:26.988 "trace_enable_tpoint_group", 00:04:26.988 "trace_clear_tpoint_mask", 00:04:26.988 "trace_set_tpoint_mask", 00:04:26.988 "notify_get_notifications", 00:04:26.988 "notify_get_types", 00:04:26.988 "spdk_get_version", 00:04:26.988 "rpc_get_methods" 00:04:26.988 ] 00:04:26.988 07:06:30 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:26.988 07:06:30 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:26.988 07:06:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:26.988 07:06:30 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:26.988 07:06:30 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2378901 00:04:26.988 07:06:30 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 2378901 ']' 00:04:26.988 07:06:30 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 2378901 00:04:26.988 07:06:30 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:04:26.988 07:06:30 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:26.988 07:06:30 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2378901 00:04:27.245 07:06:30 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:27.245 07:06:30 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:27.245 07:06:30 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2378901' 00:04:27.245 killing process with pid 2378901 00:04:27.245 07:06:30 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 2378901 00:04:27.245 07:06:30 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 2378901 00:04:27.504 00:04:27.504 real 0m1.446s 00:04:27.504 user 0m2.629s 00:04:27.504 sys 0m0.528s 00:04:27.504 07:06:30 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:27.504 07:06:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:27.504 ************************************ 00:04:27.504 END TEST spdkcli_tcp 00:04:27.504 ************************************ 00:04:27.504 07:06:30 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:27.504 07:06:30 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:27.504 07:06:30 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:27.504 07:06:30 -- common/autotest_common.sh@10 -- # set +x 00:04:27.504 ************************************ 00:04:27.504 START TEST dpdk_mem_utility 00:04:27.504 ************************************ 00:04:27.504 07:06:30 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:27.763 * Looking for test storage... 
00:04:27.763 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:27.763 07:06:30 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:27.763 07:06:30 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:04:27.763 07:06:30 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:27.763 07:06:31 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:27.763 07:06:31 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:27.763 07:06:31 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:27.763 07:06:31 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:27.763 07:06:31 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.763 07:06:31 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:27.763 07:06:31 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:27.763 07:06:31 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:27.763 07:06:31 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:27.763 07:06:31 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:27.763 07:06:31 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:27.763 07:06:31 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:27.763 07:06:31 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:27.763 07:06:31 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:27.763 07:06:31 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:27.763 07:06:31 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:27.763 07:06:31 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:27.763 07:06:31 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:27.763 07:06:31 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.763 07:06:31 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:27.763 07:06:31 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:27.763 07:06:31 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:27.763 07:06:31 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:27.763 07:06:31 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.763 07:06:31 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:27.763 07:06:31 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:27.763 07:06:31 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:27.763 07:06:31 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:27.763 07:06:31 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:27.763 07:06:31 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.763 07:06:31 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:27.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.763 --rc genhtml_branch_coverage=1 00:04:27.763 --rc genhtml_function_coverage=1 00:04:27.763 --rc genhtml_legend=1 00:04:27.763 --rc geninfo_all_blocks=1 00:04:27.763 --rc geninfo_unexecuted_blocks=1 00:04:27.763 00:04:27.763 ' 00:04:27.763 07:06:31 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:27.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.763 --rc 
genhtml_branch_coverage=1 00:04:27.763 --rc genhtml_function_coverage=1 00:04:27.763 --rc genhtml_legend=1 00:04:27.763 --rc geninfo_all_blocks=1 00:04:27.763 --rc geninfo_unexecuted_blocks=1 00:04:27.763 00:04:27.763 ' 00:04:27.763 07:06:31 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:27.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.763 --rc genhtml_branch_coverage=1 00:04:27.763 --rc genhtml_function_coverage=1 00:04:27.763 --rc genhtml_legend=1 00:04:27.763 --rc geninfo_all_blocks=1 00:04:27.763 --rc geninfo_unexecuted_blocks=1 00:04:27.763 00:04:27.763 ' 00:04:27.763 07:06:31 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:27.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.763 --rc genhtml_branch_coverage=1 00:04:27.763 --rc genhtml_function_coverage=1 00:04:27.763 --rc genhtml_legend=1 00:04:27.763 --rc geninfo_all_blocks=1 00:04:27.763 --rc geninfo_unexecuted_blocks=1 00:04:27.763 00:04:27.763 ' 00:04:27.763 07:06:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:27.763 07:06:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2379208 00:04:27.763 07:06:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.763 07:06:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2379208 00:04:27.763 07:06:31 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 2379208 ']' 00:04:27.763 07:06:31 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.763 07:06:31 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:27.763 07:06:31 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:27.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:27.763 07:06:31 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:27.763 07:06:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:27.763 [2024-11-20 07:06:31.131082] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
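The dpdk_mem_utility trace that follows drives two steps over RPC: env_dpdk_get_mem_stats asks the running spdk_tgt to dump its DPDK allocator state (the reply names /tmp/spdk_mem_dump.txt), and scripts/dpdk_mem_info.py then summarizes that dump; in this run the extra -m 0 invocation produces the detailed per-element view of heap 0 shown below. A minimal manual sketch of the same flow, assuming spdk_tgt is already listening on the default /var/tmp/spdk.sock:

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# ask the target to write out its DPDK memory stats; the RPC reply names the dump file
./scripts/rpc.py env_dpdk_get_mem_stats
# summarize the heaps, mempools and memzones recorded in that dump
./scripts/dpdk_mem_info.py
# repeat with -m 0, which in this trace yields the per-element listing for heap id 0
./scripts/dpdk_mem_info.py -m 0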
00:04:27.763 [2024-11-20 07:06:31.131170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2379208 ] 00:04:28.022 [2024-11-20 07:06:31.197072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.022 [2024-11-20 07:06:31.255118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.280 07:06:31 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:28.280 07:06:31 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:04:28.280 07:06:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:28.280 07:06:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:28.280 07:06:31 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.280 07:06:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:28.280 { 00:04:28.280 "filename": "/tmp/spdk_mem_dump.txt" 00:04:28.280 } 00:04:28.280 07:06:31 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.280 07:06:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:28.280 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:28.280 1 heaps totaling size 818.000000 MiB 00:04:28.280 size: 818.000000 MiB heap id: 0 00:04:28.280 end heaps---------- 00:04:28.280 9 mempools totaling size 603.782043 MiB 00:04:28.280 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:28.280 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:28.280 size: 100.555481 MiB name: bdev_io_2379208 00:04:28.280 size: 50.003479 MiB name: msgpool_2379208 00:04:28.280 size: 36.509338 MiB name: fsdev_io_2379208 00:04:28.280 size: 21.763794 MiB name: PDU_Pool 00:04:28.280 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:28.280 size: 4.133484 MiB name: evtpool_2379208 00:04:28.280 size: 0.026123 MiB name: Session_Pool 00:04:28.280 end mempools------- 00:04:28.280 6 memzones totaling size 4.142822 MiB 00:04:28.280 size: 1.000366 MiB name: RG_ring_0_2379208 00:04:28.280 size: 1.000366 MiB name: RG_ring_1_2379208 00:04:28.280 size: 1.000366 MiB name: RG_ring_4_2379208 00:04:28.280 size: 1.000366 MiB name: RG_ring_5_2379208 00:04:28.280 size: 0.125366 MiB name: RG_ring_2_2379208 00:04:28.280 size: 0.015991 MiB name: RG_ring_3_2379208 00:04:28.280 end memzones------- 00:04:28.280 07:06:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:28.280 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:28.280 list of free elements. 
size: 10.852478 MiB 00:04:28.280 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:28.280 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:28.280 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:28.280 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:28.280 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:28.280 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:28.280 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:28.280 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:28.280 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:28.280 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:28.280 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:28.280 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:28.280 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:28.281 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:28.281 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:28.281 list of standard malloc elements. size: 199.218628 MiB 00:04:28.281 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:28.281 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:28.281 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:28.281 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:28.281 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:28.281 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:28.281 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:28.281 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:28.281 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:28.281 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:28.281 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:28.281 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:28.281 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:28.281 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:28.281 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:28.281 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:28.281 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:28.281 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:28.281 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:28.281 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:28.281 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:28.281 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:28.281 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:28.281 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:28.281 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:28.281 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:28.281 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:28.281 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:28.281 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:28.281 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:28.281 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:28.281 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:28.281 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:28.281 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:28.281 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:28.281 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:28.281 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:28.281 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:28.281 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:28.281 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:28.281 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:28.281 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:28.281 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:28.281 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:28.281 list of memzone associated elements. size: 607.928894 MiB 00:04:28.281 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:28.281 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:28.281 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:28.281 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:28.281 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:28.281 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_2379208_0 00:04:28.281 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:28.281 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2379208_0 00:04:28.281 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:28.281 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2379208_0 00:04:28.281 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:28.281 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:28.281 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:28.281 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:28.281 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:28.281 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2379208_0 00:04:28.281 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:28.281 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2379208 00:04:28.281 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:28.281 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2379208 00:04:28.281 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:28.281 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:28.281 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:28.281 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:28.281 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:28.281 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:28.281 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:28.281 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:28.281 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:28.281 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2379208 00:04:28.281 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:28.281 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2379208 00:04:28.281 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:28.281 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2379208 00:04:28.281 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:04:28.281 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2379208 00:04:28.281 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:28.281 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2379208 00:04:28.281 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:28.281 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2379208 00:04:28.281 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:28.281 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:28.281 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:28.281 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:28.281 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:28.281 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:28.281 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:28.281 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2379208 00:04:28.281 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:28.281 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2379208 00:04:28.281 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:28.281 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:28.281 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:28.281 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:28.281 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:28.281 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2379208 00:04:28.281 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:28.281 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:28.281 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:28.281 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2379208 00:04:28.281 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:28.281 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2379208 00:04:28.281 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:28.281 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2379208 00:04:28.281 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:28.281 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:28.281 07:06:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:28.281 07:06:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2379208 00:04:28.281 07:06:31 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 2379208 ']' 00:04:28.281 07:06:31 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 2379208 00:04:28.281 07:06:31 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:04:28.281 07:06:31 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:28.281 07:06:31 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2379208 00:04:28.282 07:06:31 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:28.282 07:06:31 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:28.282 07:06:31 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2379208' 00:04:28.282 killing process with pid 2379208 00:04:28.282 07:06:31 
dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 2379208 00:04:28.282 07:06:31 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 2379208 00:04:28.848 00:04:28.848 real 0m1.178s 00:04:28.848 user 0m1.144s 00:04:28.848 sys 0m0.435s 00:04:28.848 07:06:32 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:28.848 07:06:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:28.848 ************************************ 00:04:28.848 END TEST dpdk_mem_utility 00:04:28.848 ************************************ 00:04:28.848 07:06:32 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:28.848 07:06:32 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:28.848 07:06:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:28.848 07:06:32 -- common/autotest_common.sh@10 -- # set +x 00:04:28.848 ************************************ 00:04:28.848 START TEST event 00:04:28.848 ************************************ 00:04:28.848 07:06:32 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:28.848 * Looking for test storage... 00:04:28.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:28.848 07:06:32 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:28.848 07:06:32 event -- common/autotest_common.sh@1691 -- # lcov --version 00:04:28.848 07:06:32 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:29.106 07:06:32 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:29.106 07:06:32 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:29.106 07:06:32 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:29.106 07:06:32 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:29.106 07:06:32 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.106 07:06:32 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:29.106 07:06:32 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:29.106 07:06:32 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:29.106 07:06:32 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:29.106 07:06:32 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:29.106 07:06:32 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:29.106 07:06:32 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:29.106 07:06:32 event -- scripts/common.sh@344 -- # case "$op" in 00:04:29.106 07:06:32 event -- scripts/common.sh@345 -- # : 1 00:04:29.106 07:06:32 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:29.106 07:06:32 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:29.106 07:06:32 event -- scripts/common.sh@365 -- # decimal 1 00:04:29.106 07:06:32 event -- scripts/common.sh@353 -- # local d=1 00:04:29.106 07:06:32 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.106 07:06:32 event -- scripts/common.sh@355 -- # echo 1 00:04:29.106 07:06:32 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.106 07:06:32 event -- scripts/common.sh@366 -- # decimal 2 00:04:29.106 07:06:32 event -- scripts/common.sh@353 -- # local d=2 00:04:29.106 07:06:32 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.106 07:06:32 event -- scripts/common.sh@355 -- # echo 2 00:04:29.106 07:06:32 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.106 07:06:32 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.106 07:06:32 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.106 07:06:32 event -- scripts/common.sh@368 -- # return 0 00:04:29.106 07:06:32 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.106 07:06:32 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:29.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.106 --rc genhtml_branch_coverage=1 00:04:29.106 --rc genhtml_function_coverage=1 00:04:29.106 --rc genhtml_legend=1 00:04:29.106 --rc geninfo_all_blocks=1 00:04:29.106 --rc geninfo_unexecuted_blocks=1 00:04:29.106 00:04:29.106 ' 00:04:29.106 07:06:32 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:29.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.106 --rc genhtml_branch_coverage=1 00:04:29.106 --rc genhtml_function_coverage=1 00:04:29.106 --rc genhtml_legend=1 00:04:29.106 --rc geninfo_all_blocks=1 00:04:29.106 --rc geninfo_unexecuted_blocks=1 00:04:29.106 00:04:29.106 ' 00:04:29.106 07:06:32 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:29.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.106 --rc genhtml_branch_coverage=1 00:04:29.106 --rc genhtml_function_coverage=1 00:04:29.106 --rc genhtml_legend=1 00:04:29.106 --rc geninfo_all_blocks=1 00:04:29.106 --rc geninfo_unexecuted_blocks=1 00:04:29.106 00:04:29.106 ' 00:04:29.106 07:06:32 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:29.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.106 --rc genhtml_branch_coverage=1 00:04:29.106 --rc genhtml_function_coverage=1 00:04:29.106 --rc genhtml_legend=1 00:04:29.106 --rc geninfo_all_blocks=1 00:04:29.106 --rc geninfo_unexecuted_blocks=1 00:04:29.106 00:04:29.106 ' 00:04:29.107 07:06:32 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:29.107 07:06:32 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:29.107 07:06:32 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:29.107 07:06:32 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:04:29.107 07:06:32 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:29.107 07:06:32 event -- common/autotest_common.sh@10 -- # set +x 00:04:29.107 ************************************ 00:04:29.107 START TEST event_perf 00:04:29.107 ************************************ 00:04:29.107 07:06:32 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:29.107 Running I/O for 1 seconds...[2024-11-20 07:06:32.337227] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:04:29.107 [2024-11-20 07:06:32.337294] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2379434 ] 00:04:29.107 [2024-11-20 07:06:32.402652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:29.107 [2024-11-20 07:06:32.466999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:29.107 [2024-11-20 07:06:32.467106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:29.107 [2024-11-20 07:06:32.467200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:29.107 [2024-11-20 07:06:32.467210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.539 Running I/O for 1 seconds... 00:04:30.539 lcore 0: 230457 00:04:30.539 lcore 1: 230458 00:04:30.539 lcore 2: 230457 00:04:30.539 lcore 3: 230457 00:04:30.539 done. 00:04:30.539 00:04:30.539 real 0m1.211s 00:04:30.539 user 0m4.135s 00:04:30.539 sys 0m0.070s 00:04:30.539 07:06:33 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:30.539 07:06:33 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:30.539 ************************************ 00:04:30.539 END TEST event_perf 00:04:30.539 ************************************ 00:04:30.539 07:06:33 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:30.539 07:06:33 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:30.539 07:06:33 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:30.540 07:06:33 event -- common/autotest_common.sh@10 -- # set +x 00:04:30.540 ************************************ 00:04:30.540 START TEST event_reactor 00:04:30.540 ************************************ 00:04:30.540 07:06:33 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:30.540 [2024-11-20 07:06:33.601436] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:04:30.540 [2024-11-20 07:06:33.601503] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2379592 ] 00:04:30.540 [2024-11-20 07:06:33.670772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.540 [2024-11-20 07:06:33.730916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.474 test_start 00:04:31.474 oneshot 00:04:31.474 tick 100 00:04:31.474 tick 100 00:04:31.474 tick 250 00:04:31.474 tick 100 00:04:31.474 tick 100 00:04:31.474 tick 100 00:04:31.474 tick 250 00:04:31.474 tick 500 00:04:31.474 tick 100 00:04:31.474 tick 100 00:04:31.474 tick 250 00:04:31.474 tick 100 00:04:31.474 tick 100 00:04:31.474 test_end 00:04:31.474 00:04:31.474 real 0m1.204s 00:04:31.474 user 0m1.134s 00:04:31.474 sys 0m0.066s 00:04:31.474 07:06:34 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:31.474 07:06:34 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:31.474 ************************************ 00:04:31.474 END TEST event_reactor 00:04:31.474 ************************************ 00:04:31.474 07:06:34 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:31.474 07:06:34 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:31.474 07:06:34 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:31.474 07:06:34 event -- common/autotest_common.sh@10 -- # set +x 00:04:31.474 ************************************ 00:04:31.474 START TEST event_reactor_perf 00:04:31.474 ************************************ 00:04:31.474 07:06:34 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:31.474 [2024-11-20 07:06:34.854100] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:04:31.474 [2024-11-20 07:06:34.854170] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2379753 ] 00:04:31.732 [2024-11-20 07:06:34.919736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.732 [2024-11-20 07:06:34.976660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.666 test_start 00:04:32.666 test_end 00:04:32.666 Performance: 450120 events per second 00:04:32.666 00:04:32.666 real 0m1.199s 00:04:32.666 user 0m1.126s 00:04:32.666 sys 0m0.069s 00:04:32.666 07:06:36 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:32.666 07:06:36 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:32.666 ************************************ 00:04:32.666 END TEST event_reactor_perf 00:04:32.666 ************************************ 00:04:32.666 07:06:36 event -- event/event.sh@49 -- # uname -s 00:04:32.667 07:06:36 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:32.667 07:06:36 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:32.667 07:06:36 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:32.667 07:06:36 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:32.667 07:06:36 event -- common/autotest_common.sh@10 -- # set +x 00:04:32.667 ************************************ 00:04:32.667 START TEST event_scheduler 00:04:32.667 ************************************ 00:04:32.667 07:06:36 event.event_scheduler -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:32.925 * Looking for test storage... 
00:04:32.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:32.925 07:06:36 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:32.925 07:06:36 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:04:32.925 07:06:36 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:32.925 07:06:36 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:32.925 07:06:36 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.925 07:06:36 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.925 07:06:36 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.925 07:06:36 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.925 07:06:36 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.925 07:06:36 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.926 07:06:36 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.926 07:06:36 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.926 07:06:36 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.926 07:06:36 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.926 07:06:36 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.926 07:06:36 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:32.926 07:06:36 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:32.926 07:06:36 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.926 07:06:36 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:32.926 07:06:36 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:32.926 07:06:36 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:32.926 07:06:36 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.926 07:06:36 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:32.926 07:06:36 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.926 07:06:36 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:32.926 07:06:36 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:32.926 07:06:36 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.926 07:06:36 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:32.926 07:06:36 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.926 07:06:36 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.926 07:06:36 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.926 07:06:36 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:32.926 07:06:36 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.926 07:06:36 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:32.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.926 --rc genhtml_branch_coverage=1 00:04:32.926 --rc genhtml_function_coverage=1 00:04:32.926 --rc genhtml_legend=1 00:04:32.926 --rc geninfo_all_blocks=1 00:04:32.926 --rc geninfo_unexecuted_blocks=1 00:04:32.926 00:04:32.926 ' 00:04:32.926 07:06:36 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:32.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.926 --rc genhtml_branch_coverage=1 00:04:32.926 --rc genhtml_function_coverage=1 00:04:32.926 --rc genhtml_legend=1 00:04:32.926 --rc geninfo_all_blocks=1 00:04:32.926 --rc geninfo_unexecuted_blocks=1 00:04:32.926 00:04:32.926 ' 00:04:32.926 07:06:36 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:32.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.926 --rc genhtml_branch_coverage=1 00:04:32.926 --rc genhtml_function_coverage=1 00:04:32.926 --rc genhtml_legend=1 00:04:32.926 --rc geninfo_all_blocks=1 00:04:32.926 --rc geninfo_unexecuted_blocks=1 00:04:32.926 00:04:32.926 ' 00:04:32.926 07:06:36 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:32.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.926 --rc genhtml_branch_coverage=1 00:04:32.926 --rc genhtml_function_coverage=1 00:04:32.926 --rc genhtml_legend=1 00:04:32.926 --rc geninfo_all_blocks=1 00:04:32.926 --rc geninfo_unexecuted_blocks=1 00:04:32.926 00:04:32.926 ' 00:04:32.926 07:06:36 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:32.926 07:06:36 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2379943 00:04:32.926 07:06:36 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:32.926 07:06:36 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:32.926 07:06:36 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
2379943 00:04:32.926 07:06:36 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 2379943 ']' 00:04:32.926 07:06:36 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.926 07:06:36 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:32.926 07:06:36 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.926 07:06:36 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:32.926 07:06:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:32.926 [2024-11-20 07:06:36.282958] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:04:32.926 [2024-11-20 07:06:36.283048] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2379943 ] 00:04:32.926 [2024-11-20 07:06:36.349112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:33.184 [2024-11-20 07:06:36.412575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.184 [2024-11-20 07:06:36.412633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:33.184 [2024-11-20 07:06:36.412699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:33.184 [2024-11-20 07:06:36.412704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:33.184 07:06:36 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:33.184 07:06:36 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:04:33.184 07:06:36 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:33.184 07:06:36 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.184 07:06:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:33.184 [2024-11-20 07:06:36.533673] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:33.184 [2024-11-20 07:06:36.533698] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:33.184 [2024-11-20 07:06:36.533714] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:33.184 [2024-11-20 07:06:36.533740] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:33.184 [2024-11-20 07:06:36.533750] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:33.184 07:06:36 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.184 07:06:36 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:33.184 07:06:36 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.184 07:06:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:33.443 [2024-11-20 07:06:36.638672] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
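The scheduler_create_thread subtest traced below drives the scheduler app purely over RPC: the generic framework_set_scheduler/framework_start_init calls plus the test-only scheduler_thread_* methods loaded through rpc.py's --plugin option. A rough manual sketch of the same sequence, assuming the scheduler app above is still listening on the default /var/tmp/spdk.sock and that the scheduler_plugin module is importable by rpc.py (its location below is an assumption):

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
export PYTHONPATH=$PYTHONPATH:$PWD/test/event/scheduler   # assumption: scheduler_plugin.py lives in the test directory
# pick the dynamic scheduler while the app is still waiting for RPC, then finish framework init
./scripts/rpc.py framework_set_scheduler dynamic
./scripts/rpc.py framework_start_init
# create a thread pinned to core 0 that reports itself 100% busy; the RPC prints the new thread id
tid=$(./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)
# drop that thread to 50% busy, then delete it, as the subtest does with its own thread ids
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete "$tid"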
00:04:33.443 07:06:36 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.443 07:06:36 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:33.443 07:06:36 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:33.443 07:06:36 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:33.443 07:06:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:33.443 ************************************ 00:04:33.443 START TEST scheduler_create_thread 00:04:33.443 ************************************ 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:33.443 2 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:33.443 3 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:33.443 4 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:33.443 5 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:33.443 6 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:33.443 7 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:33.443 8 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:33.443 9 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:33.443 10 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:33.443 07:06:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.444 07:06:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:33.444 07:06:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.444 07:06:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:33.444 07:06:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.444 07:06:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:33.444 07:06:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:33.444 07:06:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.444 07:06:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.009 07:06:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.009 00:04:34.009 real 0m0.592s 00:04:34.009 user 0m0.007s 00:04:34.009 sys 0m0.007s 00:04:34.009 07:06:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:34.009 07:06:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.009 ************************************ 00:04:34.009 END TEST scheduler_create_thread 00:04:34.009 ************************************ 00:04:34.009 07:06:37 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:34.009 07:06:37 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2379943 00:04:34.009 07:06:37 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 2379943 ']' 00:04:34.009 07:06:37 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 2379943 00:04:34.009 07:06:37 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:04:34.009 07:06:37 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:34.009 07:06:37 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2379943 00:04:34.009 07:06:37 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:04:34.009 07:06:37 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:04:34.009 07:06:37 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2379943' 00:04:34.009 killing process with pid 2379943 00:04:34.009 07:06:37 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 2379943 00:04:34.009 07:06:37 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 2379943 00:04:34.575 [2024-11-20 07:06:37.738976] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
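While a target is still running, the generic RPCs listed in the rpc_get_methods output earlier expose the reactor and thread state that runs like this one manipulate; the scheduler app here has already been killed, so the sketch below applies to the next live target on the default /var/tmp/spdk.sock:

./scripts/rpc.py framework_get_scheduler    # name and options of the scheduler currently in use
./scripts/rpc.py framework_get_reactors     # per-core reactors and the lightweight threads placed on them
./scripts/rpc.py thread_get_stats           # per-thread busy/idle counters, the input the dynamic scheduler balances on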
00:04:34.575 00:04:34.575 real 0m1.861s 00:04:34.575 user 0m2.577s 00:04:34.575 sys 0m0.348s 00:04:34.575 07:06:37 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:34.575 07:06:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:34.575 ************************************ 00:04:34.575 END TEST event_scheduler 00:04:34.575 ************************************ 00:04:34.575 07:06:37 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:34.575 07:06:37 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:34.575 07:06:37 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:34.575 07:06:37 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:34.575 07:06:37 event -- common/autotest_common.sh@10 -- # set +x 00:04:34.575 ************************************ 00:04:34.575 START TEST app_repeat 00:04:34.575 ************************************ 00:04:34.575 07:06:38 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:04:34.575 07:06:38 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.575 07:06:38 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.833 07:06:38 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:34.833 07:06:38 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:34.833 07:06:38 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:34.833 07:06:38 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:34.833 07:06:38 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:34.833 07:06:38 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2380254 00:04:34.833 07:06:38 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:34.833 07:06:38 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:34.833 07:06:38 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2380254' 00:04:34.833 Process app_repeat pid: 2380254 00:04:34.833 07:06:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:34.833 07:06:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:34.833 spdk_app_start Round 0 00:04:34.833 07:06:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2380254 /var/tmp/spdk-nbd.sock 00:04:34.833 07:06:38 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2380254 ']' 00:04:34.833 07:06:38 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:34.833 07:06:38 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:34.833 07:06:38 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:34.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:34.833 07:06:38 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:34.833 07:06:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:34.833 [2024-11-20 07:06:38.030086] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:04:34.833 [2024-11-20 07:06:38.030154] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2380254 ] 00:04:34.833 [2024-11-20 07:06:38.097082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:34.833 [2024-11-20 07:06:38.155617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:34.833 [2024-11-20 07:06:38.155622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.091 07:06:38 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:35.091 07:06:38 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:35.091 07:06:38 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:35.349 Malloc0 00:04:35.349 07:06:38 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:35.607 Malloc1 00:04:35.607 07:06:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:35.607 07:06:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.607 07:06:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:35.607 07:06:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:35.607 07:06:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.607 07:06:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:35.607 07:06:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:35.607 07:06:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.607 07:06:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:35.607 07:06:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:35.607 07:06:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.607 07:06:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:35.607 07:06:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:35.607 07:06:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:35.607 07:06:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:35.607 07:06:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:35.865 /dev/nbd0 00:04:35.865 07:06:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:35.865 07:06:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:35.865 07:06:39 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:35.865 07:06:39 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:35.865 07:06:39 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:35.865 07:06:39 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:35.865 07:06:39 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 
/proc/partitions 00:04:35.865 07:06:39 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:35.865 07:06:39 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:35.865 07:06:39 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:35.865 07:06:39 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:35.865 1+0 records in 00:04:35.865 1+0 records out 00:04:35.865 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00016695 s, 24.5 MB/s 00:04:35.865 07:06:39 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:35.865 07:06:39 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:35.865 07:06:39 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:35.865 07:06:39 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:35.865 07:06:39 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:35.865 07:06:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:35.865 07:06:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:35.865 07:06:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:36.123 /dev/nbd1 00:04:36.123 07:06:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:36.123 07:06:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:36.123 07:06:39 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:36.123 07:06:39 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:36.123 07:06:39 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:36.123 07:06:39 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:36.123 07:06:39 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:36.123 07:06:39 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:36.123 07:06:39 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:36.123 07:06:39 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:36.123 07:06:39 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:36.123 1+0 records in 00:04:36.123 1+0 records out 00:04:36.123 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213543 s, 19.2 MB/s 00:04:36.123 07:06:39 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:36.123 07:06:39 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:36.123 07:06:39 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:36.123 07:06:39 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:36.123 07:06:39 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:36.123 07:06:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:36.123 07:06:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:36.123 07:06:39 
event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:36.123 07:06:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.123 07:06:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:36.381 07:06:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:36.381 { 00:04:36.381 "nbd_device": "/dev/nbd0", 00:04:36.381 "bdev_name": "Malloc0" 00:04:36.381 }, 00:04:36.381 { 00:04:36.381 "nbd_device": "/dev/nbd1", 00:04:36.381 "bdev_name": "Malloc1" 00:04:36.381 } 00:04:36.381 ]' 00:04:36.381 07:06:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:36.381 { 00:04:36.381 "nbd_device": "/dev/nbd0", 00:04:36.381 "bdev_name": "Malloc0" 00:04:36.381 }, 00:04:36.381 { 00:04:36.381 "nbd_device": "/dev/nbd1", 00:04:36.381 "bdev_name": "Malloc1" 00:04:36.381 } 00:04:36.381 ]' 00:04:36.381 07:06:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:36.638 07:06:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:36.638 /dev/nbd1' 00:04:36.638 07:06:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:36.638 /dev/nbd1' 00:04:36.638 07:06:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:36.638 07:06:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:36.638 07:06:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:36.638 07:06:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:36.638 07:06:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:36.638 07:06:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:36.638 07:06:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.638 07:06:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:36.638 07:06:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:36.638 07:06:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:36.638 07:06:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:36.638 07:06:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:36.638 256+0 records in 00:04:36.638 256+0 records out 00:04:36.638 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00481386 s, 218 MB/s 00:04:36.638 07:06:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:36.638 07:06:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:36.638 256+0 records in 00:04:36.638 256+0 records out 00:04:36.638 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0194636 s, 53.9 MB/s 00:04:36.639 07:06:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:36.639 07:06:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:36.639 256+0 records in 00:04:36.639 256+0 records out 00:04:36.639 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0221244 s, 47.4 MB/s 00:04:36.639 07:06:39 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:36.639 07:06:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.639 07:06:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:36.639 07:06:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:36.639 07:06:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:36.639 07:06:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:36.639 07:06:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:36.639 07:06:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:36.639 07:06:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:36.639 07:06:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:36.639 07:06:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:36.639 07:06:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:36.639 07:06:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:36.639 07:06:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.639 07:06:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:36.639 07:06:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:36.639 07:06:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:36.639 07:06:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:36.639 07:06:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:36.896 07:06:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:36.896 07:06:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:36.896 07:06:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:36.896 07:06:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:36.896 07:06:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:36.896 07:06:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:36.896 07:06:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:36.896 07:06:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:36.896 07:06:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:36.896 07:06:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:37.158 07:06:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:37.158 07:06:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:37.158 07:06:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:37.158 07:06:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:37.158 07:06:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:37.158 07:06:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:37.158 07:06:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:37.158 07:06:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:37.158 07:06:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:37.158 07:06:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.158 07:06:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:37.416 07:06:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:37.416 07:06:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:37.416 07:06:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:37.416 07:06:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:37.416 07:06:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:37.416 07:06:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:37.416 07:06:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:37.416 07:06:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:37.416 07:06:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:37.416 07:06:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:37.416 07:06:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:37.416 07:06:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:37.416 07:06:40 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:37.673 07:06:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:37.931 [2024-11-20 07:06:41.317792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:38.189 [2024-11-20 07:06:41.376085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.189 [2024-11-20 07:06:41.376085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:38.189 [2024-11-20 07:06:41.431533] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:38.189 [2024-11-20 07:06:41.431588] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:40.715 07:06:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:40.715 07:06:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:40.715 spdk_app_start Round 1 00:04:40.715 07:06:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2380254 /var/tmp/spdk-nbd.sock 00:04:40.715 07:06:44 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2380254 ']' 00:04:40.715 07:06:44 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:40.715 07:06:44 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:40.715 07:06:44 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:40.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:40.715 07:06:44 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:40.715 07:06:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:40.972 07:06:44 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:40.972 07:06:44 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:40.972 07:06:44 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:41.230 Malloc0 00:04:41.230 07:06:44 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:41.801 Malloc1 00:04:41.801 07:06:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:41.801 07:06:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.801 07:06:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:41.801 07:06:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:41.801 07:06:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.801 07:06:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:41.801 07:06:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:41.801 07:06:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.801 07:06:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:41.801 07:06:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:41.801 07:06:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.801 07:06:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:41.801 07:06:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:41.801 07:06:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:41.801 07:06:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:41.801 07:06:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:41.801 /dev/nbd0 00:04:42.059 07:06:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:42.059 07:06:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:42.059 07:06:45 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:42.059 07:06:45 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:42.059 07:06:45 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:42.059 07:06:45 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:42.059 07:06:45 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:42.059 07:06:45 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:42.059 07:06:45 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:42.059 07:06:45 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:42.059 07:06:45 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:42.059 1+0 records in 00:04:42.059 1+0 records out 00:04:42.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249271 s, 16.4 MB/s 00:04:42.060 07:06:45 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:42.060 07:06:45 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:42.060 07:06:45 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:42.060 07:06:45 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:42.060 07:06:45 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:42.060 07:06:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:42.060 07:06:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:42.060 07:06:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:42.318 /dev/nbd1 00:04:42.318 07:06:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:42.318 07:06:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:42.318 07:06:45 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:42.318 07:06:45 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:42.318 07:06:45 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:42.318 07:06:45 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:42.318 07:06:45 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:42.318 07:06:45 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:42.318 07:06:45 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:42.318 07:06:45 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:42.318 07:06:45 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:42.318 1+0 records in 00:04:42.318 1+0 records out 00:04:42.318 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246797 s, 16.6 MB/s 00:04:42.318 07:06:45 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:42.318 07:06:45 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:42.318 07:06:45 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:42.318 07:06:45 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:42.318 07:06:45 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:42.318 07:06:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:42.318 07:06:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:42.318 07:06:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:42.318 07:06:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.318 07:06:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:42.576 07:06:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:42.576 { 00:04:42.576 "nbd_device": "/dev/nbd0", 00:04:42.576 "bdev_name": "Malloc0" 00:04:42.576 }, 00:04:42.576 { 00:04:42.576 "nbd_device": "/dev/nbd1", 00:04:42.576 "bdev_name": "Malloc1" 00:04:42.576 } 00:04:42.576 ]' 00:04:42.576 07:06:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:42.576 { 00:04:42.576 "nbd_device": "/dev/nbd0", 00:04:42.576 "bdev_name": "Malloc0" 00:04:42.576 }, 00:04:42.576 { 00:04:42.576 "nbd_device": "/dev/nbd1", 00:04:42.576 "bdev_name": "Malloc1" 00:04:42.576 } 00:04:42.576 ]' 00:04:42.576 07:06:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:42.576 07:06:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:42.576 /dev/nbd1' 00:04:42.576 07:06:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:42.576 /dev/nbd1' 00:04:42.576 07:06:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:42.576 07:06:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:42.576 07:06:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:42.576 07:06:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:42.576 07:06:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:42.576 07:06:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:42.576 07:06:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:42.576 07:06:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:42.576 07:06:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:42.576 07:06:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:42.576 07:06:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:42.576 07:06:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:42.576 256+0 records in 00:04:42.576 256+0 records out 00:04:42.576 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00501586 s, 209 MB/s 00:04:42.576 07:06:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:42.576 07:06:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:42.576 256+0 records in 00:04:42.576 256+0 records out 00:04:42.576 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0199705 s, 52.5 MB/s 00:04:42.576 07:06:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:42.576 07:06:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:42.576 256+0 records in 00:04:42.576 256+0 records out 00:04:42.576 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0219159 s, 47.8 MB/s 00:04:42.576 07:06:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:42.576 07:06:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:42.576 07:06:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:42.576 07:06:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:42.576 07:06:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:42.576 07:06:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:42.576 07:06:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:42.576 07:06:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:42.576 07:06:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:42.576 07:06:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:42.576 07:06:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:42.576 07:06:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:42.576 07:06:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:42.576 07:06:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.576 07:06:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:42.576 07:06:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:42.576 07:06:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:42.576 07:06:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:42.576 07:06:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:42.834 07:06:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:42.834 07:06:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:42.834 07:06:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:42.834 07:06:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:42.834 07:06:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:42.834 07:06:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:42.834 07:06:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:42.834 07:06:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:42.834 07:06:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:42.834 07:06:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:43.399 07:06:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:43.399 07:06:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:43.399 07:06:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:43.399 07:06:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:43.399 07:06:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:43.399 07:06:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:43.399 07:06:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:43.399 07:06:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:43.400 07:06:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:43.400 07:06:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.400 07:06:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:43.400 07:06:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:43.400 07:06:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:43.400 07:06:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:43.658 07:06:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:43.658 07:06:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:43.658 07:06:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:43.658 07:06:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:43.658 07:06:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:43.658 07:06:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:43.658 07:06:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:43.658 07:06:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:43.658 07:06:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:43.658 07:06:46 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:43.916 07:06:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:44.173 [2024-11-20 07:06:47.375446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:44.173 [2024-11-20 07:06:47.429267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.173 [2024-11-20 07:06:47.429268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:44.173 [2024-11-20 07:06:47.490110] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:44.173 [2024-11-20 07:06:47.490176] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:47.454 07:06:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:47.454 07:06:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:47.454 spdk_app_start Round 2 00:04:47.454 07:06:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2380254 /var/tmp/spdk-nbd.sock 00:04:47.454 07:06:50 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2380254 ']' 00:04:47.454 07:06:50 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:47.454 07:06:50 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:47.454 07:06:50 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:47.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:47.454 07:06:50 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:47.454 07:06:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:47.454 07:06:50 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:47.454 07:06:50 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:47.454 07:06:50 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:47.454 Malloc0 00:04:47.454 07:06:50 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:47.711 Malloc1 00:04:47.711 07:06:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:47.711 07:06:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.711 07:06:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:47.711 07:06:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:47.711 07:06:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.711 07:06:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:47.711 07:06:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:47.711 07:06:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.711 07:06:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:47.711 07:06:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:47.711 07:06:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.711 07:06:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:47.711 07:06:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:47.711 07:06:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:47.711 07:06:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:47.711 07:06:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:47.969 /dev/nbd0 00:04:47.969 07:06:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:47.969 07:06:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:47.969 07:06:51 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:47.969 07:06:51 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:47.969 07:06:51 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:47.969 07:06:51 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:47.969 07:06:51 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:47.969 07:06:51 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:47.969 07:06:51 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:47.969 07:06:51 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:47.969 07:06:51 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:47.969 1+0 records in 00:04:47.969 1+0 records out 00:04:47.969 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253212 s, 16.2 MB/s 00:04:47.969 07:06:51 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:47.969 07:06:51 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:47.969 07:06:51 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:47.969 07:06:51 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:47.969 07:06:51 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:47.969 07:06:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:47.969 07:06:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:47.969 07:06:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:48.534 /dev/nbd1 00:04:48.534 07:06:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:48.534 07:06:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:48.534 07:06:51 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:48.534 07:06:51 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:48.534 07:06:51 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:48.534 07:06:51 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:48.534 07:06:51 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:48.534 07:06:51 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:48.534 07:06:51 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:48.534 07:06:51 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:48.534 07:06:51 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:48.534 1+0 records in 00:04:48.534 1+0 records out 00:04:48.534 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000177675 s, 23.1 MB/s 00:04:48.534 07:06:51 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:48.534 07:06:51 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:48.534 07:06:51 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:48.534 07:06:51 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:48.534 07:06:51 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:48.534 07:06:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:48.534 07:06:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:48.534 07:06:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:48.534 07:06:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.534 07:06:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:48.793 07:06:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:48.793 { 00:04:48.793 "nbd_device": "/dev/nbd0", 00:04:48.793 "bdev_name": "Malloc0" 00:04:48.793 }, 00:04:48.793 { 00:04:48.793 "nbd_device": "/dev/nbd1", 00:04:48.793 "bdev_name": "Malloc1" 00:04:48.793 } 00:04:48.793 ]' 00:04:48.793 07:06:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:48.793 { 00:04:48.793 "nbd_device": "/dev/nbd0", 00:04:48.793 "bdev_name": "Malloc0" 00:04:48.793 }, 00:04:48.793 { 00:04:48.793 "nbd_device": "/dev/nbd1", 00:04:48.793 "bdev_name": "Malloc1" 00:04:48.793 } 00:04:48.793 ]' 00:04:48.793 07:06:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:48.793 07:06:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:48.793 /dev/nbd1' 00:04:48.793 07:06:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:48.793 /dev/nbd1' 00:04:48.793 07:06:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:48.793 07:06:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:48.793 07:06:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:48.793 07:06:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:48.793 07:06:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:48.793 07:06:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:48.793 07:06:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.793 07:06:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:48.793 07:06:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:48.793 07:06:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:48.793 07:06:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:48.793 07:06:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:48.793 256+0 records in 00:04:48.793 256+0 records out 00:04:48.793 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00526335 s, 199 MB/s 00:04:48.793 07:06:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:48.793 07:06:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:48.793 256+0 records in 00:04:48.793 256+0 records out 00:04:48.793 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201199 s, 52.1 MB/s 00:04:48.793 07:06:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:48.793 07:06:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:48.793 256+0 records in 00:04:48.793 256+0 records out 00:04:48.793 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220865 s, 47.5 MB/s 00:04:48.793 07:06:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:48.793 07:06:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.793 07:06:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:48.793 07:06:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:48.793 07:06:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:48.793 07:06:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:48.793 07:06:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:48.793 07:06:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:48.793 07:06:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:48.793 07:06:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:48.793 07:06:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:48.793 07:06:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:48.793 07:06:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:48.793 07:06:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.793 07:06:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.793 07:06:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:48.793 07:06:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:48.793 07:06:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:48.793 07:06:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:49.051 07:06:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:49.051 07:06:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:49.051 07:06:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:49.051 07:06:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:49.051 07:06:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:49.051 07:06:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:49.051 07:06:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:49.051 07:06:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:49.051 07:06:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:49.051 07:06:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:49.308 07:06:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:49.308 07:06:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:49.308 07:06:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:49.308 07:06:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:49.308 07:06:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:49.308 07:06:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:49.308 07:06:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:49.308 07:06:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:49.308 07:06:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:49.308 07:06:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.308 07:06:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:49.567 07:06:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:49.567 07:06:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:49.567 07:06:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:49.567 07:06:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:49.567 07:06:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:49.567 07:06:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:49.567 07:06:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:49.567 07:06:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:49.567 07:06:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:49.567 07:06:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:49.567 07:06:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:49.567 07:06:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:49.567 07:06:52 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:50.133 07:06:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:50.133 [2024-11-20 07:06:53.516855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:50.391 [2024-11-20 07:06:53.577221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.391 [2024-11-20 07:06:53.577225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.391 [2024-11-20 07:06:53.637414] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:50.391 [2024-11-20 07:06:53.637482] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:52.917 07:06:56 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2380254 /var/tmp/spdk-nbd.sock 00:04:52.917 07:06:56 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2380254 ']' 00:04:52.917 07:06:56 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:52.917 07:06:56 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:52.917 07:06:56 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:52.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:52.917 07:06:56 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:52.917 07:06:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:53.175 07:06:56 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:53.175 07:06:56 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:53.175 07:06:56 event.app_repeat -- event/event.sh@39 -- # killprocess 2380254 00:04:53.175 07:06:56 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 2380254 ']' 00:04:53.175 07:06:56 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 2380254 00:04:53.175 07:06:56 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:04:53.175 07:06:56 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:53.175 07:06:56 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2380254 00:04:53.175 07:06:56 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:53.175 07:06:56 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:53.175 07:06:56 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2380254' 00:04:53.175 killing process with pid 2380254 00:04:53.175 07:06:56 event.app_repeat -- common/autotest_common.sh@971 -- # kill 2380254 00:04:53.175 07:06:56 event.app_repeat -- common/autotest_common.sh@976 -- # wait 2380254 00:04:53.433 spdk_app_start is called in Round 0. 00:04:53.433 Shutdown signal received, stop current app iteration 00:04:53.433 Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 reinitialization... 00:04:53.433 spdk_app_start is called in Round 1. 00:04:53.433 Shutdown signal received, stop current app iteration 00:04:53.433 Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 reinitialization... 00:04:53.433 spdk_app_start is called in Round 2. 00:04:53.433 Shutdown signal received, stop current app iteration 00:04:53.433 Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 reinitialization... 00:04:53.433 spdk_app_start is called in Round 3. 
00:04:53.433 Shutdown signal received, stop current app iteration 00:04:53.433 07:06:56 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:53.433 07:06:56 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:53.433 00:04:53.433 real 0m18.798s 00:04:53.433 user 0m41.561s 00:04:53.433 sys 0m3.201s 00:04:53.433 07:06:56 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:53.433 07:06:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:53.433 ************************************ 00:04:53.433 END TEST app_repeat 00:04:53.433 ************************************ 00:04:53.433 07:06:56 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:53.433 07:06:56 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:53.433 07:06:56 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:53.433 07:06:56 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:53.433 07:06:56 event -- common/autotest_common.sh@10 -- # set +x 00:04:53.433 ************************************ 00:04:53.433 START TEST cpu_locks 00:04:53.433 ************************************ 00:04:53.433 07:06:56 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:53.691 * Looking for test storage... 00:04:53.691 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:53.691 07:06:56 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:53.691 07:06:56 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:04:53.691 07:06:56 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:53.691 07:06:56 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:53.691 07:06:56 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.691 07:06:56 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.691 07:06:56 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.691 07:06:56 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.691 07:06:56 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.691 07:06:56 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.691 07:06:56 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.691 07:06:56 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.691 07:06:56 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.691 07:06:56 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.691 07:06:56 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.691 07:06:56 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:53.691 07:06:56 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:53.691 07:06:56 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.691 07:06:56 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:53.691 07:06:56 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:53.691 07:06:56 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:53.691 07:06:56 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.691 07:06:56 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:53.691 07:06:56 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.691 07:06:56 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:53.691 07:06:56 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:53.691 07:06:56 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.691 07:06:56 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:53.691 07:06:56 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.691 07:06:56 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.691 07:06:56 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.691 07:06:56 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:53.691 07:06:56 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.691 07:06:56 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:53.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.691 --rc genhtml_branch_coverage=1 00:04:53.691 --rc genhtml_function_coverage=1 00:04:53.691 --rc genhtml_legend=1 00:04:53.691 --rc geninfo_all_blocks=1 00:04:53.691 --rc geninfo_unexecuted_blocks=1 00:04:53.691 00:04:53.691 ' 00:04:53.691 07:06:56 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:53.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.691 --rc genhtml_branch_coverage=1 00:04:53.691 --rc genhtml_function_coverage=1 00:04:53.691 --rc genhtml_legend=1 00:04:53.691 --rc geninfo_all_blocks=1 00:04:53.691 --rc geninfo_unexecuted_blocks=1 00:04:53.691 00:04:53.691 ' 00:04:53.691 07:06:56 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:53.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.691 --rc genhtml_branch_coverage=1 00:04:53.691 --rc genhtml_function_coverage=1 00:04:53.691 --rc genhtml_legend=1 00:04:53.691 --rc geninfo_all_blocks=1 00:04:53.691 --rc geninfo_unexecuted_blocks=1 00:04:53.691 00:04:53.691 ' 00:04:53.691 07:06:56 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:53.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.691 --rc genhtml_branch_coverage=1 00:04:53.691 --rc genhtml_function_coverage=1 00:04:53.691 --rc genhtml_legend=1 00:04:53.691 --rc geninfo_all_blocks=1 00:04:53.691 --rc geninfo_unexecuted_blocks=1 00:04:53.691 00:04:53.691 ' 00:04:53.691 07:06:56 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:53.691 07:06:56 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:53.691 07:06:56 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:53.691 07:06:56 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:53.691 07:06:56 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:53.691 07:06:56 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:53.691 07:06:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:53.691 ************************************ 
00:04:53.691 START TEST default_locks 00:04:53.691 ************************************ 00:04:53.691 07:06:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:04:53.691 07:06:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2382739 00:04:53.691 07:06:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:53.691 07:06:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2382739 00:04:53.691 07:06:57 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 2382739 ']' 00:04:53.691 07:06:57 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.691 07:06:57 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:53.691 07:06:57 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.692 07:06:57 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:53.692 07:06:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:53.692 [2024-11-20 07:06:57.066965] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:04:53.692 [2024-11-20 07:06:57.067056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2382739 ] 00:04:53.949 [2024-11-20 07:06:57.132883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.949 [2024-11-20 07:06:57.193320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.207 07:06:57 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:54.207 07:06:57 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:04:54.207 07:06:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2382739 00:04:54.207 07:06:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2382739 00:04:54.207 07:06:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:54.464 lslocks: write error 00:04:54.464 07:06:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2382739 00:04:54.464 07:06:57 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 2382739 ']' 00:04:54.464 07:06:57 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 2382739 00:04:54.464 07:06:57 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:04:54.464 07:06:57 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:54.464 07:06:57 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2382739 00:04:54.464 07:06:57 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:54.464 07:06:57 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:54.464 07:06:57 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with 
pid 2382739' 00:04:54.464 killing process with pid 2382739 00:04:54.464 07:06:57 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 2382739 00:04:54.464 07:06:57 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 2382739 00:04:55.029 07:06:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2382739 00:04:55.029 07:06:58 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:04:55.029 07:06:58 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2382739 00:04:55.029 07:06:58 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:55.029 07:06:58 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:55.029 07:06:58 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:55.029 07:06:58 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:55.029 07:06:58 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 2382739 00:04:55.029 07:06:58 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 2382739 ']' 00:04:55.029 07:06:58 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.029 07:06:58 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:55.029 07:06:58 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:55.029 07:06:58 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:55.029 07:06:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:55.029 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (2382739) - No such process 00:04:55.030 ERROR: process (pid: 2382739) is no longer running 00:04:55.030 07:06:58 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:55.030 07:06:58 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:04:55.030 07:06:58 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:04:55.030 07:06:58 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:55.030 07:06:58 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:55.030 07:06:58 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:55.030 07:06:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:55.030 07:06:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:55.030 07:06:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:55.030 07:06:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:55.030 00:04:55.030 real 0m1.167s 00:04:55.030 user 0m1.152s 00:04:55.030 sys 0m0.488s 00:04:55.030 07:06:58 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:55.030 07:06:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:55.030 ************************************ 00:04:55.030 END TEST default_locks 00:04:55.030 ************************************ 00:04:55.030 07:06:58 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:55.030 07:06:58 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:55.030 07:06:58 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:55.030 07:06:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:55.030 ************************************ 00:04:55.030 START TEST default_locks_via_rpc 00:04:55.030 ************************************ 00:04:55.030 07:06:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:04:55.030 07:06:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2382907 00:04:55.030 07:06:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:55.030 07:06:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2382907 00:04:55.030 07:06:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2382907 ']' 00:04:55.030 07:06:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.030 07:06:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:55.030 07:06:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
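Summing up the default_locks run that ends above: one spdk_tgt is started on core mask 0x1, locks_exist confirms via lslocks that the pid holds a spdk_cpu_lock file, the target is killed, and a follow-up wait on the dead pid is expected to fail. A condensed sketch of that flow, with sleep and plain kill standing in for the harness's waitforlisten/killprocess helpers and hugepages assumed to be configured as on this CI node:

SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
"$SPDK_BIN" -m 0x1 &                       # claims core 0 via /var/tmp/spdk_cpu_lock_000
pid=$!
sleep 2                                    # crude stand-in for the harness's waitforlisten
lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by $pid"
kill "$pid"
wait "$pid" 2>/dev/null                    # reap it; the lock goes away with the process
! kill -0 "$pid" 2>/dev/null && echo "pid $pid is gone, as the NOT waitforlisten check expects"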
00:04:55.030 07:06:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:55.030 07:06:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.030 [2024-11-20 07:06:58.285389] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:04:55.030 [2024-11-20 07:06:58.285469] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2382907 ] 00:04:55.030 [2024-11-20 07:06:58.350462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.030 [2024-11-20 07:06:58.410214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.287 07:06:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:55.287 07:06:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:55.287 07:06:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:55.287 07:06:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:55.287 07:06:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.287 07:06:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:55.287 07:06:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:55.287 07:06:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:55.287 07:06:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:55.288 07:06:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:55.288 07:06:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:55.288 07:06:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:55.288 07:06:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.288 07:06:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:55.288 07:06:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2382907 00:04:55.288 07:06:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2382907 00:04:55.288 07:06:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:55.852 07:06:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2382907 00:04:55.852 07:06:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 2382907 ']' 00:04:55.852 07:06:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 2382907 00:04:55.852 07:06:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:04:55.852 07:06:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:55.852 07:06:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2382907 00:04:55.852 07:06:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:55.852 
07:06:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:55.852 07:06:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2382907' 00:04:55.852 killing process with pid 2382907 00:04:55.852 07:06:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 2382907 00:04:55.852 07:06:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 2382907 00:04:56.109 00:04:56.109 real 0m1.201s 00:04:56.109 user 0m1.176s 00:04:56.109 sys 0m0.492s 00:04:56.109 07:06:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:56.109 07:06:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.109 ************************************ 00:04:56.109 END TEST default_locks_via_rpc 00:04:56.109 ************************************ 00:04:56.109 07:06:59 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:56.109 07:06:59 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:56.109 07:06:59 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:56.109 07:06:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:56.109 ************************************ 00:04:56.109 START TEST non_locking_app_on_locked_coremask 00:04:56.109 ************************************ 00:04:56.109 07:06:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:04:56.109 07:06:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2383069 00:04:56.109 07:06:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:56.109 07:06:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2383069 /var/tmp/spdk.sock 00:04:56.109 07:06:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2383069 ']' 00:04:56.109 07:06:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.109 07:06:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:56.109 07:06:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.109 07:06:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:56.109 07:06:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:56.109 [2024-11-20 07:06:59.536569] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
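default_locks_via_rpc, closed out just above, toggles the same core lock at runtime through RPCs instead of command-line flags: framework_disable_cpumask_locks releases the lock file, framework_enable_cpumask_locks re-claims it. Roughly, assuming scripts/rpc.py from the checked-out tree stands in for the harness's rpc_cmd wrapper and sleep for waitforlisten:

SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$SPDK_BIN" -m 0x1 & pid=$!; sleep 2
"$RPC" framework_disable_cpumask_locks           # drop /var/tmp/spdk_cpu_lock_000
lslocks -p "$pid" | grep -c spdk_cpu_lock        # expect 0 lock entries
"$RPC" framework_enable_cpumask_locks            # take the core 0 lock back
lslocks -p "$pid" | grep -c spdk_cpu_lock        # expect 1 lock entry
kill "$pid"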
00:04:56.109 [2024-11-20 07:06:59.536668] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2383069 ] 00:04:56.367 [2024-11-20 07:06:59.602811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.367 [2024-11-20 07:06:59.660802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.625 07:06:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:56.625 07:06:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:56.625 07:06:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2383077 00:04:56.625 07:06:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:56.625 07:06:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2383077 /var/tmp/spdk2.sock 00:04:56.625 07:06:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2383077 ']' 00:04:56.625 07:06:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:56.625 07:06:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:56.625 07:06:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:56.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:56.625 07:06:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:56.625 07:06:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:56.625 [2024-11-20 07:06:59.988033] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:04:56.625 [2024-11-20 07:06:59.988118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2383077 ] 00:04:56.882 [2024-11-20 07:07:00.094909] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:56.882 [2024-11-20 07:07:00.094939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.882 [2024-11-20 07:07:00.217204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.901 07:07:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:57.901 07:07:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:57.901 07:07:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2383069 00:04:57.901 07:07:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2383069 00:04:57.901 07:07:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:58.158 lslocks: write error 00:04:58.158 07:07:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2383069 00:04:58.158 07:07:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2383069 ']' 00:04:58.158 07:07:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 2383069 00:04:58.158 07:07:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:58.158 07:07:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:58.158 07:07:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2383069 00:04:58.158 07:07:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:58.158 07:07:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:58.158 07:07:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2383069' 00:04:58.158 killing process with pid 2383069 00:04:58.158 07:07:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 2383069 00:04:58.158 07:07:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 2383069 00:04:59.091 07:07:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2383077 00:04:59.091 07:07:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2383077 ']' 00:04:59.091 07:07:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 2383077 00:04:59.091 07:07:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:59.091 07:07:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:59.091 07:07:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2383077 00:04:59.091 07:07:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:59.091 07:07:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:59.091 07:07:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2383077' 00:04:59.091 
killing process with pid 2383077 00:04:59.091 07:07:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 2383077 00:04:59.091 07:07:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 2383077 00:04:59.350 00:04:59.350 real 0m3.220s 00:04:59.350 user 0m3.445s 00:04:59.350 sys 0m1.044s 00:04:59.350 07:07:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:59.350 07:07:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:59.350 ************************************ 00:04:59.350 END TEST non_locking_app_on_locked_coremask 00:04:59.350 ************************************ 00:04:59.350 07:07:02 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:59.350 07:07:02 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:59.350 07:07:02 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:59.350 07:07:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.350 ************************************ 00:04:59.350 START TEST locking_app_on_unlocked_coremask 00:04:59.350 ************************************ 00:04:59.350 07:07:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:04:59.350 07:07:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2383506 00:04:59.350 07:07:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:59.350 07:07:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2383506 /var/tmp/spdk.sock 00:04:59.350 07:07:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2383506 ']' 00:04:59.350 07:07:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.350 07:07:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:59.350 07:07:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.350 07:07:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:59.350 07:07:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:59.608 [2024-11-20 07:07:02.809666] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:04:59.608 [2024-11-20 07:07:02.809763] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2383506 ] 00:04:59.608 [2024-11-20 07:07:02.873240] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
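The non_locking_app_on_locked_coremask test that finishes above shows --disable-cpumask-locks letting a second target share core 0 with one that already holds the lock. In outline, with the binary path and sockets as in this run and sleep standing in for waitforlisten:

SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
"$SPDK_BIN" -m 0x1 & pid1=$!; sleep 2                                    # holds spdk_cpu_lock_000
"$SPDK_BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & pid2=$!
sleep 2                                                                  # comes up despite the overlap
lslocks -p "$pid1" | grep -q spdk_cpu_lock && echo "first target still owns core 0"
kill "$pid1" "$pid2"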
00:04:59.608 [2024-11-20 07:07:02.873270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.608 [2024-11-20 07:07:02.926458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.866 07:07:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:59.866 07:07:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:59.866 07:07:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2383514 00:04:59.866 07:07:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:59.866 07:07:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2383514 /var/tmp/spdk2.sock 00:04:59.866 07:07:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2383514 ']' 00:04:59.866 07:07:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:59.866 07:07:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:59.866 07:07:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:59.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:59.866 07:07:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:59.866 07:07:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:59.866 [2024-11-20 07:07:03.244187] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:04:59.866 [2024-11-20 07:07:03.244268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2383514 ] 00:05:00.123 [2024-11-20 07:07:03.344780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.123 [2024-11-20 07:07:03.461788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.056 07:07:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:01.056 07:07:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:01.056 07:07:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2383514 00:05:01.056 07:07:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2383514 00:05:01.056 07:07:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:01.314 lslocks: write error 00:05:01.314 07:07:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2383506 00:05:01.314 07:07:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2383506 ']' 00:05:01.314 07:07:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 2383506 00:05:01.314 07:07:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:01.314 07:07:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:01.314 07:07:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2383506 00:05:01.314 07:07:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:01.314 07:07:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:01.314 07:07:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2383506' 00:05:01.314 killing process with pid 2383506 00:05:01.314 07:07:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 2383506 00:05:01.314 07:07:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 2383506 00:05:02.246 07:07:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2383514 00:05:02.246 07:07:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2383514 ']' 00:05:02.246 07:07:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 2383514 00:05:02.246 07:07:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:02.246 07:07:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:02.246 07:07:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2383514 00:05:02.246 07:07:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:02.246 07:07:05 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:02.246 07:07:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2383514' 00:05:02.246 killing process with pid 2383514 00:05:02.246 07:07:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 2383514 00:05:02.246 07:07:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 2383514 00:05:02.504 00:05:02.504 real 0m3.124s 00:05:02.504 user 0m3.338s 00:05:02.504 sys 0m1.001s 00:05:02.504 07:07:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:02.504 07:07:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:02.504 ************************************ 00:05:02.504 END TEST locking_app_on_unlocked_coremask 00:05:02.504 ************************************ 00:05:02.504 07:07:05 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:02.504 07:07:05 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:02.504 07:07:05 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:02.504 07:07:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:02.504 ************************************ 00:05:02.504 START TEST locking_app_on_locked_coremask 00:05:02.504 ************************************ 00:05:02.504 07:07:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:05:02.504 07:07:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2383940 00:05:02.504 07:07:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:02.504 07:07:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2383940 /var/tmp/spdk.sock 00:05:02.504 07:07:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2383940 ']' 00:05:02.504 07:07:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.504 07:07:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:02.504 07:07:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.504 07:07:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:02.504 07:07:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:02.762 [2024-11-20 07:07:05.987815] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
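locking_app_on_unlocked_coremask, ending above, is the mirror image: the first target opts out of locking, so the second, lock-enabled target on the same mask is the one that ends up owning the core 0 lock file. A sketch under the same assumptions as the previous snippets:

SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
"$SPDK_BIN" -m 0x1 --disable-cpumask-locks & pid1=$!; sleep 2    # takes no lock
"$SPDK_BIN" -m 0x1 -r /var/tmp/spdk2.sock & pid2=$!; sleep 2     # lock-enabled instance claims core 0
lslocks -p "$pid2" | grep -q spdk_cpu_lock && echo "core 0 lock held by the second target"
kill "$pid1" "$pid2"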
00:05:02.762 [2024-11-20 07:07:05.987912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2383940 ] 00:05:02.762 [2024-11-20 07:07:06.052443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.762 [2024-11-20 07:07:06.105294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.019 07:07:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:03.019 07:07:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:03.019 07:07:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2383949 00:05:03.019 07:07:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:03.019 07:07:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2383949 /var/tmp/spdk2.sock 00:05:03.019 07:07:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:03.019 07:07:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2383949 /var/tmp/spdk2.sock 00:05:03.019 07:07:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:03.019 07:07:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:03.019 07:07:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:03.019 07:07:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:03.019 07:07:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2383949 /var/tmp/spdk2.sock 00:05:03.019 07:07:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2383949 ']' 00:05:03.019 07:07:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:03.019 07:07:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:03.019 07:07:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:03.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:03.019 07:07:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:03.019 07:07:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:03.019 [2024-11-20 07:07:06.422864] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:05:03.019 [2024-11-20 07:07:06.422954] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2383949 ] 00:05:03.277 [2024-11-20 07:07:06.524557] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2383940 has claimed it. 00:05:03.277 [2024-11-20 07:07:06.524630] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:03.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (2383949) - No such process 00:05:03.841 ERROR: process (pid: 2383949) is no longer running 00:05:03.841 07:07:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:03.841 07:07:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:05:03.841 07:07:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:03.841 07:07:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:03.841 07:07:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:03.841 07:07:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:03.841 07:07:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2383940 00:05:03.841 07:07:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2383940 00:05:03.841 07:07:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:04.098 lslocks: write error 00:05:04.098 07:07:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2383940 00:05:04.098 07:07:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2383940 ']' 00:05:04.098 07:07:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 2383940 00:05:04.098 07:07:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:04.098 07:07:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:04.098 07:07:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2383940 00:05:04.098 07:07:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:04.098 07:07:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:04.098 07:07:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2383940' 00:05:04.098 killing process with pid 2383940 00:05:04.098 07:07:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 2383940 00:05:04.098 07:07:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 2383940 00:05:04.662 00:05:04.662 real 0m1.926s 00:05:04.662 user 0m2.125s 00:05:04.662 sys 0m0.610s 00:05:04.662 07:07:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 
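The NOT waitforlisten 2383949 sequence above is the harness's way of asserting that a command fails: run it, capture the status, treat exit codes above 128 as real errors, and otherwise succeed only when the status is non-zero. A stripped-down version of that idiom (illustrative, not the autotest_common.sh implementation):

# minimal NOT-style assertion: succeed only if the wrapped command fails "politely"
not() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return "$es"    # signal-like exits propagate as genuine errors
    (( es != 0 ))                     # plain failure is the expected, passing outcome
}
not kill -0 2383949 2>/dev/null && echo "pid 2383949 is really gone"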
00:05:04.662 07:07:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:04.662 ************************************ 00:05:04.662 END TEST locking_app_on_locked_coremask 00:05:04.662 ************************************ 00:05:04.662 07:07:07 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:04.662 07:07:07 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:04.662 07:07:07 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:04.662 07:07:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:04.662 ************************************ 00:05:04.662 START TEST locking_overlapped_coremask 00:05:04.662 ************************************ 00:05:04.662 07:07:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:05:04.662 07:07:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2384118 00:05:04.662 07:07:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:04.662 07:07:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2384118 /var/tmp/spdk.sock 00:05:04.662 07:07:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 2384118 ']' 00:05:04.662 07:07:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.662 07:07:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:04.662 07:07:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.662 07:07:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:04.662 07:07:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:04.662 [2024-11-20 07:07:07.966462] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
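locking_app_on_locked_coremask, closed out just above, is the strict negative case: with locking left on for both instances, a second target on the same mask must refuse to start, which is exactly the "Cannot create lock on core 0, probably process 2383940 has claimed it" error earlier in the trace. A minimal reproduction, with sleep again standing in for waitforlisten:

SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
"$SPDK_BIN" -m 0x1 & pid1=$!; sleep 2                     # owns the core 0 lock
if ! "$SPDK_BIN" -m 0x1 -r /var/tmp/spdk2.sock; then      # exits: cannot create lock on core 0
    echo "second target correctly refused the claimed core"
fi
kill "$pid1"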
00:05:04.662 [2024-11-20 07:07:07.966546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2384118 ] 00:05:04.662 [2024-11-20 07:07:08.035943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:04.919 [2024-11-20 07:07:08.098523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.919 [2024-11-20 07:07:08.098582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:04.919 [2024-11-20 07:07:08.098587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.176 07:07:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:05.176 07:07:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:05.176 07:07:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2384249 00:05:05.176 07:07:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2384249 /var/tmp/spdk2.sock 00:05:05.176 07:07:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:05.176 07:07:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:05.176 07:07:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2384249 /var/tmp/spdk2.sock 00:05:05.176 07:07:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:05.177 07:07:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:05.177 07:07:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:05.177 07:07:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:05.177 07:07:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2384249 /var/tmp/spdk2.sock 00:05:05.177 07:07:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 2384249 ']' 00:05:05.177 07:07:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:05.177 07:07:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:05.177 07:07:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:05.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:05.177 07:07:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:05.177 07:07:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:05.177 [2024-11-20 07:07:08.437127] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:05:05.177 [2024-11-20 07:07:08.437220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2384249 ] 00:05:05.177 [2024-11-20 07:07:08.542055] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2384118 has claimed it. 00:05:05.177 [2024-11-20 07:07:08.542126] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:05.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (2384249) - No such process 00:05:05.742 ERROR: process (pid: 2384249) is no longer running 00:05:05.742 07:07:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:05.742 07:07:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:05:05.742 07:07:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:05.742 07:07:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:05.742 07:07:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:05.742 07:07:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:05.742 07:07:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:05.742 07:07:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:05.742 07:07:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:05.742 07:07:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:05.742 07:07:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2384118 00:05:05.742 07:07:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 2384118 ']' 00:05:05.742 07:07:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 2384118 00:05:05.742 07:07:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:05:05.742 07:07:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:05.742 07:07:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2384118 00:05:05.999 07:07:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:05.999 07:07:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:05.999 07:07:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2384118' 00:05:05.999 killing process with pid 2384118 00:05:05.999 07:07:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 2384118 00:05:05.999 07:07:09 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 2384118 00:05:06.256 00:05:06.256 real 0m1.706s 00:05:06.256 user 0m4.727s 00:05:06.256 sys 0m0.477s 00:05:06.256 07:07:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:06.256 07:07:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:06.256 ************************************ 00:05:06.256 END TEST locking_overlapped_coremask 00:05:06.256 ************************************ 00:05:06.256 07:07:09 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:06.256 07:07:09 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:06.256 07:07:09 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:06.257 07:07:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:06.257 ************************************ 00:05:06.257 START TEST locking_overlapped_coremask_via_rpc 00:05:06.257 ************************************ 00:05:06.257 07:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:05:06.257 07:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2384411 00:05:06.257 07:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:06.257 07:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2384411 /var/tmp/spdk.sock 00:05:06.257 07:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2384411 ']' 00:05:06.257 07:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.257 07:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:06.257 07:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.257 07:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:06.257 07:07:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.515 [2024-11-20 07:07:09.725016] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:05:06.515 [2024-11-20 07:07:09.725110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2384411 ] 00:05:06.515 [2024-11-20 07:07:09.791435] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
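locking_overlapped_coremask, wrapped up above, extends the check to multi-core masks: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so the two masks collide only on core 2, yet that single shared core is enough to abort the second target while the first keeps /var/tmp/spdk_cpu_lock_000 through _002. Sketch under the same assumptions:

SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
"$SPDK_BIN" -m 0x7 & pid1=$!; sleep 2                     # locks cores 0, 1 and 2
"$SPDK_BIN" -m 0x1c -r /var/tmp/spdk2.sock \
    || echo "mask 0x1c refused: core 2 already claimed"   # expected failure
ls /var/tmp/spdk_cpu_lock_*                               # expect _000 _001 _002, all held by pid1
kill "$pid1"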
00:05:06.515 [2024-11-20 07:07:09.791474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:06.515 [2024-11-20 07:07:09.853916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.515 [2024-11-20 07:07:09.853980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:06.515 [2024-11-20 07:07:09.853984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.773 07:07:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:06.773 07:07:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:06.773 07:07:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2384421 00:05:06.773 07:07:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2384421 /var/tmp/spdk2.sock 00:05:06.773 07:07:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2384421 ']' 00:05:06.773 07:07:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:06.773 07:07:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:06.773 07:07:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:06.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:06.773 07:07:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:06.773 07:07:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.773 07:07:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:06.773 [2024-11-20 07:07:10.201726] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:05:06.773 [2024-11-20 07:07:10.201809] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2384421 ] 00:05:07.030 [2024-11-20 07:07:10.312731] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:07.030 [2024-11-20 07:07:10.312766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:07.030 [2024-11-20 07:07:10.439812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:07.031 [2024-11-20 07:07:10.443394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:07.031 [2024-11-20 07:07:10.443397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:07.962 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:07.962 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:07.962 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:07.962 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.962 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.962 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.962 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:07.962 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:07.962 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:07.962 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:07.962 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:07.962 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:07.962 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:07.962 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:07.962 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.962 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.962 [2024-11-20 07:07:11.232397] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2384411 has claimed it. 
00:05:07.962 request: 00:05:07.962 { 00:05:07.962 "method": "framework_enable_cpumask_locks", 00:05:07.962 "req_id": 1 00:05:07.962 } 00:05:07.962 Got JSON-RPC error response 00:05:07.962 response: 00:05:07.962 { 00:05:07.962 "code": -32603, 00:05:07.962 "message": "Failed to claim CPU core: 2" 00:05:07.962 } 00:05:07.962 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:07.962 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:07.962 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:07.962 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:07.962 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:07.962 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2384411 /var/tmp/spdk.sock 00:05:07.962 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2384411 ']' 00:05:07.962 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.962 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:07.962 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.962 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:07.962 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.219 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:08.219 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:08.219 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2384421 /var/tmp/spdk2.sock 00:05:08.219 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2384421 ']' 00:05:08.219 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:08.219 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:08.219 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:08.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
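The request/response pair above is the usual SPDK JSON-RPC exchange: rpc.py sends framework_enable_cpumask_locks to the second target's Unix socket, and because core 2 is already locked by pid 2384411 the call fails with -32603. A minimal sketch of that exchange, assuming standard JSON-RPC 2.0 framing over /var/tmp/spdk2.sock (simplified; rpc.py handles framing and errors more carefully):

```python
# Minimal sketch of the RPC call traced above; framing and error handling are
# simplified assumptions, not a copy of rpc.py.
import json, socket

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect("/var/tmp/spdk2.sock")               # second spdk_tgt from the log
request = {"jsonrpc": "2.0", "id": 1,
           "method": "framework_enable_cpumask_locks"}
sock.sendall(json.dumps(request).encode())
reply = json.loads(sock.recv(65536).decode())
# With core 2 already claimed by pid 2384411, the reply carries
# {"code": -32603, "message": "Failed to claim CPU core: 2"} as shown above.
print(reply.get("error") or reply.get("result"))
sock.close()
```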
00:05:08.219 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:08.220 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.476 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:08.476 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:08.476 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:08.476 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:08.476 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:08.476 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:08.476 00:05:08.476 real 0m2.127s 00:05:08.476 user 0m1.198s 00:05:08.476 sys 0m0.178s 00:05:08.476 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:08.476 07:07:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.476 ************************************ 00:05:08.476 END TEST locking_overlapped_coremask_via_rpc 00:05:08.476 ************************************ 00:05:08.476 07:07:11 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:08.476 07:07:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2384411 ]] 00:05:08.476 07:07:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2384411 00:05:08.476 07:07:11 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2384411 ']' 00:05:08.476 07:07:11 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2384411 00:05:08.476 07:07:11 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:08.476 07:07:11 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:08.476 07:07:11 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2384411 00:05:08.476 07:07:11 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:08.476 07:07:11 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:08.476 07:07:11 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2384411' 00:05:08.476 killing process with pid 2384411 00:05:08.476 07:07:11 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 2384411 00:05:08.476 07:07:11 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 2384411 00:05:09.043 07:07:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2384421 ]] 00:05:09.043 07:07:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2384421 00:05:09.043 07:07:12 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2384421 ']' 00:05:09.043 07:07:12 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2384421 00:05:09.043 07:07:12 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:09.043 07:07:12 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' 
Linux = Linux ']' 00:05:09.043 07:07:12 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2384421 00:05:09.043 07:07:12 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:09.043 07:07:12 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:09.043 07:07:12 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2384421' 00:05:09.043 killing process with pid 2384421 00:05:09.043 07:07:12 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 2384421 00:05:09.043 07:07:12 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 2384421 00:05:09.610 07:07:12 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:09.610 07:07:12 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:09.610 07:07:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2384411 ]] 00:05:09.610 07:07:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2384411 00:05:09.610 07:07:12 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2384411 ']' 00:05:09.610 07:07:12 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2384411 00:05:09.610 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2384411) - No such process 00:05:09.610 07:07:12 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 2384411 is not found' 00:05:09.610 Process with pid 2384411 is not found 00:05:09.610 07:07:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2384421 ]] 00:05:09.610 07:07:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2384421 00:05:09.610 07:07:12 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2384421 ']' 00:05:09.610 07:07:12 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2384421 00:05:09.610 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2384421) - No such process 00:05:09.610 07:07:12 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 2384421 is not found' 00:05:09.610 Process with pid 2384421 is not found 00:05:09.610 07:07:12 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:09.610 00:05:09.610 real 0m15.934s 00:05:09.610 user 0m29.195s 00:05:09.610 sys 0m5.304s 00:05:09.610 07:07:12 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:09.611 07:07:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.611 ************************************ 00:05:09.611 END TEST cpu_locks 00:05:09.611 ************************************ 00:05:09.611 00:05:09.611 real 0m40.644s 00:05:09.611 user 1m19.937s 00:05:09.611 sys 0m9.311s 00:05:09.611 07:07:12 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:09.611 07:07:12 event -- common/autotest_common.sh@10 -- # set +x 00:05:09.611 ************************************ 00:05:09.611 END TEST event 00:05:09.611 ************************************ 00:05:09.611 07:07:12 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:09.611 07:07:12 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:09.611 07:07:12 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:09.611 07:07:12 -- common/autotest_common.sh@10 -- # set +x 00:05:09.611 ************************************ 00:05:09.611 START TEST thread 00:05:09.611 ************************************ 00:05:09.611 07:07:12 thread -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:09.611 * Looking for test storage... 00:05:09.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:09.611 07:07:12 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:09.611 07:07:12 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:05:09.611 07:07:12 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:09.611 07:07:12 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:09.611 07:07:12 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.611 07:07:12 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.611 07:07:12 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.611 07:07:12 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.611 07:07:12 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.611 07:07:12 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.611 07:07:12 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.611 07:07:12 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.611 07:07:12 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.611 07:07:12 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.611 07:07:12 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.611 07:07:12 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:09.611 07:07:12 thread -- scripts/common.sh@345 -- # : 1 00:05:09.611 07:07:12 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.611 07:07:12 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:09.611 07:07:12 thread -- scripts/common.sh@365 -- # decimal 1 00:05:09.611 07:07:12 thread -- scripts/common.sh@353 -- # local d=1 00:05:09.611 07:07:12 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.611 07:07:12 thread -- scripts/common.sh@355 -- # echo 1 00:05:09.611 07:07:13 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.611 07:07:13 thread -- scripts/common.sh@366 -- # decimal 2 00:05:09.611 07:07:13 thread -- scripts/common.sh@353 -- # local d=2 00:05:09.611 07:07:13 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.611 07:07:13 thread -- scripts/common.sh@355 -- # echo 2 00:05:09.611 07:07:13 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.611 07:07:13 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.611 07:07:13 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.611 07:07:13 thread -- scripts/common.sh@368 -- # return 0 00:05:09.611 07:07:13 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.611 07:07:13 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:09.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.611 --rc genhtml_branch_coverage=1 00:05:09.611 --rc genhtml_function_coverage=1 00:05:09.611 --rc genhtml_legend=1 00:05:09.611 --rc geninfo_all_blocks=1 00:05:09.611 --rc geninfo_unexecuted_blocks=1 00:05:09.611 00:05:09.611 ' 00:05:09.611 07:07:13 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:09.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.611 --rc genhtml_branch_coverage=1 00:05:09.611 --rc genhtml_function_coverage=1 00:05:09.611 --rc genhtml_legend=1 00:05:09.611 --rc geninfo_all_blocks=1 00:05:09.611 --rc geninfo_unexecuted_blocks=1 00:05:09.611 
00:05:09.611 ' 00:05:09.611 07:07:13 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:09.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.611 --rc genhtml_branch_coverage=1 00:05:09.611 --rc genhtml_function_coverage=1 00:05:09.611 --rc genhtml_legend=1 00:05:09.611 --rc geninfo_all_blocks=1 00:05:09.611 --rc geninfo_unexecuted_blocks=1 00:05:09.611 00:05:09.611 ' 00:05:09.611 07:07:13 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:09.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.611 --rc genhtml_branch_coverage=1 00:05:09.611 --rc genhtml_function_coverage=1 00:05:09.611 --rc genhtml_legend=1 00:05:09.611 --rc geninfo_all_blocks=1 00:05:09.611 --rc geninfo_unexecuted_blocks=1 00:05:09.611 00:05:09.611 ' 00:05:09.611 07:07:13 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:09.611 07:07:13 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:09.611 07:07:13 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:09.611 07:07:13 thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.611 ************************************ 00:05:09.611 START TEST thread_poller_perf 00:05:09.611 ************************************ 00:05:09.611 07:07:13 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:09.868 [2024-11-20 07:07:13.048267] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:05:09.868 [2024-11-20 07:07:13.048341] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2384920 ] 00:05:09.868 [2024-11-20 07:07:13.111933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.868 [2024-11-20 07:07:13.167364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.868 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:10.805 [2024-11-20T06:07:14.238Z] ====================================== 00:05:10.805 [2024-11-20T06:07:14.238Z] busy:2710950633 (cyc) 00:05:10.805 [2024-11-20T06:07:14.238Z] total_run_count: 367000 00:05:10.805 [2024-11-20T06:07:14.238Z] tsc_hz: 2700000000 (cyc) 00:05:10.805 [2024-11-20T06:07:14.238Z] ====================================== 00:05:10.805 [2024-11-20T06:07:14.238Z] poller_cost: 7386 (cyc), 2735 (nsec) 00:05:10.805 00:05:10.805 real 0m1.203s 00:05:10.805 user 0m1.137s 00:05:10.805 sys 0m0.061s 00:05:11.064 07:07:14 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:11.064 07:07:14 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:11.064 ************************************ 00:05:11.064 END TEST thread_poller_perf 00:05:11.064 ************************************ 00:05:11.064 07:07:14 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:11.064 07:07:14 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:11.064 07:07:14 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:11.064 07:07:14 thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.064 ************************************ 00:05:11.064 START TEST thread_poller_perf 00:05:11.064 ************************************ 00:05:11.064 07:07:14 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:11.064 [2024-11-20 07:07:14.300073] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:05:11.064 [2024-11-20 07:07:14.300141] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2385075 ] 00:05:11.064 [2024-11-20 07:07:14.364583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.064 [2024-11-20 07:07:14.424340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.064 Running 1000 pollers for 1 seconds with 0 microseconds period. 
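The 1-microsecond run above reports raw busy cycles, the total number of poller executions, and the TSC frequency; the poller_cost line is consistent with dividing busy cycles by the run count and converting cycles to nanoseconds via tsc_hz. A quick check of that arithmetic with the logged values (variable names are illustrative, not from the SPDK tree):

```python
# Reproduce the poller_cost figures from the 1 us run above.
busy_cycles = 2_710_950_633     # "busy" counter from the report
total_run_count = 367_000       # poller executions in the 1 second window
tsc_hz = 2_700_000_000          # timestamp-counter frequency

cost_cyc = busy_cycles // total_run_count         # 7386 cycles per execution
cost_nsec = cost_cyc * 1_000_000_000 // tsc_hz    # 2735 ns at 2.7 GHz
print(cost_cyc, cost_nsec)
```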
00:05:12.434 [2024-11-20T06:07:15.867Z] ====================================== 00:05:12.434 [2024-11-20T06:07:15.867Z] busy:2702163408 (cyc) 00:05:12.434 [2024-11-20T06:07:15.867Z] total_run_count: 4841000 00:05:12.434 [2024-11-20T06:07:15.867Z] tsc_hz: 2700000000 (cyc) 00:05:12.434 [2024-11-20T06:07:15.867Z] ====================================== 00:05:12.434 [2024-11-20T06:07:15.867Z] poller_cost: 558 (cyc), 206 (nsec) 00:05:12.434 00:05:12.434 real 0m1.202s 00:05:12.434 user 0m1.128s 00:05:12.434 sys 0m0.068s 00:05:12.434 07:07:15 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:12.434 07:07:15 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:12.434 ************************************ 00:05:12.434 END TEST thread_poller_perf 00:05:12.434 ************************************ 00:05:12.434 07:07:15 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:12.434 00:05:12.434 real 0m2.656s 00:05:12.434 user 0m2.405s 00:05:12.434 sys 0m0.256s 00:05:12.434 07:07:15 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:12.434 07:07:15 thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.434 ************************************ 00:05:12.434 END TEST thread 00:05:12.434 ************************************ 00:05:12.434 07:07:15 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:12.434 07:07:15 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:12.434 07:07:15 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:12.434 07:07:15 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:12.434 07:07:15 -- common/autotest_common.sh@10 -- # set +x 00:05:12.434 ************************************ 00:05:12.434 START TEST app_cmdline 00:05:12.434 ************************************ 00:05:12.434 07:07:15 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:12.434 * Looking for test storage... 
00:05:12.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:12.434 07:07:15 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:12.434 07:07:15 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:05:12.434 07:07:15 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:12.435 07:07:15 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:12.435 07:07:15 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.435 07:07:15 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.435 07:07:15 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.435 07:07:15 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.435 07:07:15 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.435 07:07:15 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.435 07:07:15 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.435 07:07:15 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.435 07:07:15 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.435 07:07:15 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.435 07:07:15 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.435 07:07:15 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:12.435 07:07:15 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:12.435 07:07:15 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.435 07:07:15 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:12.435 07:07:15 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:12.435 07:07:15 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:12.435 07:07:15 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.435 07:07:15 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:12.435 07:07:15 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.435 07:07:15 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:12.435 07:07:15 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:12.435 07:07:15 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.435 07:07:15 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:12.435 07:07:15 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.435 07:07:15 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.435 07:07:15 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.435 07:07:15 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:12.435 07:07:15 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.435 07:07:15 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:12.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.435 --rc genhtml_branch_coverage=1 00:05:12.435 --rc genhtml_function_coverage=1 00:05:12.435 --rc genhtml_legend=1 00:05:12.435 --rc geninfo_all_blocks=1 00:05:12.435 --rc geninfo_unexecuted_blocks=1 00:05:12.435 00:05:12.435 ' 00:05:12.435 07:07:15 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:12.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.435 --rc genhtml_branch_coverage=1 00:05:12.435 --rc genhtml_function_coverage=1 00:05:12.435 --rc genhtml_legend=1 00:05:12.435 --rc geninfo_all_blocks=1 00:05:12.435 --rc geninfo_unexecuted_blocks=1 
00:05:12.435 00:05:12.435 ' 00:05:12.435 07:07:15 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:12.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.435 --rc genhtml_branch_coverage=1 00:05:12.435 --rc genhtml_function_coverage=1 00:05:12.435 --rc genhtml_legend=1 00:05:12.435 --rc geninfo_all_blocks=1 00:05:12.435 --rc geninfo_unexecuted_blocks=1 00:05:12.435 00:05:12.435 ' 00:05:12.435 07:07:15 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:12.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.435 --rc genhtml_branch_coverage=1 00:05:12.435 --rc genhtml_function_coverage=1 00:05:12.435 --rc genhtml_legend=1 00:05:12.435 --rc geninfo_all_blocks=1 00:05:12.435 --rc geninfo_unexecuted_blocks=1 00:05:12.435 00:05:12.435 ' 00:05:12.435 07:07:15 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:12.435 07:07:15 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2385284 00:05:12.435 07:07:15 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:12.435 07:07:15 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2385284 00:05:12.435 07:07:15 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 2385284 ']' 00:05:12.435 07:07:15 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.435 07:07:15 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:12.435 07:07:15 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.435 07:07:15 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:12.435 07:07:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:12.435 [2024-11-20 07:07:15.768624] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:05:12.435 [2024-11-20 07:07:15.768730] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2385284 ] 00:05:12.435 [2024-11-20 07:07:15.839763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.694 [2024-11-20 07:07:15.902190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.952 07:07:16 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:12.952 07:07:16 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:05:12.952 07:07:16 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:13.211 { 00:05:13.211 "version": "SPDK v25.01-pre git sha1 5716007f5", 00:05:13.211 "fields": { 00:05:13.211 "major": 25, 00:05:13.211 "minor": 1, 00:05:13.211 "patch": 0, 00:05:13.211 "suffix": "-pre", 00:05:13.211 "commit": "5716007f5" 00:05:13.211 } 00:05:13.211 } 00:05:13.211 07:07:16 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:13.211 07:07:16 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:13.211 07:07:16 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:13.211 07:07:16 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:13.211 07:07:16 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:13.211 07:07:16 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:13.211 07:07:16 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.211 07:07:16 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:13.211 07:07:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:13.211 07:07:16 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.211 07:07:16 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:13.211 07:07:16 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:13.211 07:07:16 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:13.211 07:07:16 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:13.211 07:07:16 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:13.211 07:07:16 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:13.211 07:07:16 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:13.211 07:07:16 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:13.211 07:07:16 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:13.211 07:07:16 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:13.211 07:07:16 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:13.211 07:07:16 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:13.211 07:07:16 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:13.211 07:07:16 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:13.471 request: 00:05:13.471 { 00:05:13.471 "method": "env_dpdk_get_mem_stats", 00:05:13.471 "req_id": 1 00:05:13.471 } 00:05:13.471 Got JSON-RPC error response 00:05:13.471 response: 00:05:13.471 { 00:05:13.471 "code": -32601, 00:05:13.471 "message": "Method not found" 00:05:13.471 } 00:05:13.471 07:07:16 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:13.471 07:07:16 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:13.471 07:07:16 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:13.471 07:07:16 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:13.471 07:07:16 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2385284 00:05:13.471 07:07:16 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 2385284 ']' 00:05:13.471 07:07:16 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 2385284 00:05:13.471 07:07:16 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:05:13.471 07:07:16 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:13.471 07:07:16 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2385284 00:05:13.471 07:07:16 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:13.471 07:07:16 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:13.471 07:07:16 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2385284' 00:05:13.471 killing process with pid 2385284 00:05:13.471 07:07:16 app_cmdline -- common/autotest_common.sh@971 -- # kill 2385284 00:05:13.471 07:07:16 app_cmdline -- common/autotest_common.sh@976 -- # wait 2385284 00:05:14.039 00:05:14.039 real 0m1.613s 00:05:14.039 user 0m1.986s 00:05:14.039 sys 0m0.486s 00:05:14.039 07:07:17 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:14.039 07:07:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:14.039 ************************************ 00:05:14.039 END TEST app_cmdline 00:05:14.039 ************************************ 00:05:14.039 07:07:17 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:14.039 07:07:17 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:14.039 07:07:17 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:14.039 07:07:17 -- common/autotest_common.sh@10 -- # set +x 00:05:14.039 ************************************ 00:05:14.039 START TEST version 00:05:14.039 ************************************ 00:05:14.039 07:07:17 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:14.039 * Looking for test storage... 
00:05:14.039 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:14.039 07:07:17 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:14.039 07:07:17 version -- common/autotest_common.sh@1691 -- # lcov --version 00:05:14.039 07:07:17 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:14.039 07:07:17 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:14.039 07:07:17 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.039 07:07:17 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.039 07:07:17 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.039 07:07:17 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.039 07:07:17 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.039 07:07:17 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.039 07:07:17 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.039 07:07:17 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.039 07:07:17 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.039 07:07:17 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.039 07:07:17 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.039 07:07:17 version -- scripts/common.sh@344 -- # case "$op" in 00:05:14.039 07:07:17 version -- scripts/common.sh@345 -- # : 1 00:05:14.039 07:07:17 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.039 07:07:17 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:14.039 07:07:17 version -- scripts/common.sh@365 -- # decimal 1 00:05:14.039 07:07:17 version -- scripts/common.sh@353 -- # local d=1 00:05:14.039 07:07:17 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.039 07:07:17 version -- scripts/common.sh@355 -- # echo 1 00:05:14.039 07:07:17 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.039 07:07:17 version -- scripts/common.sh@366 -- # decimal 2 00:05:14.039 07:07:17 version -- scripts/common.sh@353 -- # local d=2 00:05:14.039 07:07:17 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.039 07:07:17 version -- scripts/common.sh@355 -- # echo 2 00:05:14.039 07:07:17 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.039 07:07:17 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.039 07:07:17 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.039 07:07:17 version -- scripts/common.sh@368 -- # return 0 00:05:14.039 07:07:17 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.039 07:07:17 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:14.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.039 --rc genhtml_branch_coverage=1 00:05:14.039 --rc genhtml_function_coverage=1 00:05:14.039 --rc genhtml_legend=1 00:05:14.039 --rc geninfo_all_blocks=1 00:05:14.039 --rc geninfo_unexecuted_blocks=1 00:05:14.039 00:05:14.039 ' 00:05:14.039 07:07:17 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:14.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.039 --rc genhtml_branch_coverage=1 00:05:14.039 --rc genhtml_function_coverage=1 00:05:14.039 --rc genhtml_legend=1 00:05:14.039 --rc geninfo_all_blocks=1 00:05:14.039 --rc geninfo_unexecuted_blocks=1 00:05:14.039 00:05:14.039 ' 00:05:14.039 07:07:17 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:14.039 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.039 --rc genhtml_branch_coverage=1 00:05:14.039 --rc genhtml_function_coverage=1 00:05:14.039 --rc genhtml_legend=1 00:05:14.039 --rc geninfo_all_blocks=1 00:05:14.039 --rc geninfo_unexecuted_blocks=1 00:05:14.039 00:05:14.039 ' 00:05:14.039 07:07:17 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:14.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.039 --rc genhtml_branch_coverage=1 00:05:14.039 --rc genhtml_function_coverage=1 00:05:14.039 --rc genhtml_legend=1 00:05:14.039 --rc geninfo_all_blocks=1 00:05:14.039 --rc geninfo_unexecuted_blocks=1 00:05:14.039 00:05:14.039 ' 00:05:14.039 07:07:17 version -- app/version.sh@17 -- # get_header_version major 00:05:14.039 07:07:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:14.039 07:07:17 version -- app/version.sh@14 -- # cut -f2 00:05:14.039 07:07:17 version -- app/version.sh@14 -- # tr -d '"' 00:05:14.039 07:07:17 version -- app/version.sh@17 -- # major=25 00:05:14.039 07:07:17 version -- app/version.sh@18 -- # get_header_version minor 00:05:14.039 07:07:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:14.039 07:07:17 version -- app/version.sh@14 -- # cut -f2 00:05:14.039 07:07:17 version -- app/version.sh@14 -- # tr -d '"' 00:05:14.039 07:07:17 version -- app/version.sh@18 -- # minor=1 00:05:14.039 07:07:17 version -- app/version.sh@19 -- # get_header_version patch 00:05:14.039 07:07:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:14.039 07:07:17 version -- app/version.sh@14 -- # cut -f2 00:05:14.039 07:07:17 version -- app/version.sh@14 -- # tr -d '"' 00:05:14.039 07:07:17 version -- app/version.sh@19 -- # patch=0 00:05:14.039 07:07:17 version -- app/version.sh@20 -- # get_header_version suffix 00:05:14.039 07:07:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:14.039 07:07:17 version -- app/version.sh@14 -- # cut -f2 00:05:14.039 07:07:17 version -- app/version.sh@14 -- # tr -d '"' 00:05:14.039 07:07:17 version -- app/version.sh@20 -- # suffix=-pre 00:05:14.039 07:07:17 version -- app/version.sh@22 -- # version=25.1 00:05:14.039 07:07:17 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:14.039 07:07:17 version -- app/version.sh@28 -- # version=25.1rc0 00:05:14.039 07:07:17 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:14.039 07:07:17 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:14.039 07:07:17 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:14.039 07:07:17 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:14.039 00:05:14.039 real 0m0.203s 00:05:14.039 user 0m0.140s 00:05:14.039 sys 0m0.089s 00:05:14.039 07:07:17 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:14.039 
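The version test above pulls the SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX macros out of include/spdk/version.h with grep, cut, and tr, then compares the assembled string (25.1rc0 here) with the Python package's spdk.__version__. A rough Python equivalent of that extraction; the header path and the mapping from a "-pre" suffix to "rc0" are assumptions based on the values in the log:

```python
# Illustrative re-implementation of get_header_version from version.sh above.
# The header path and the suffix handling are assumptions based on the log.
import re

def get_header_version(field, path="include/spdk/version.h"):
    pattern = re.compile(rf'^#define SPDK_VERSION_{field}\s+(.*)$')
    with open(path) as header:
        for line in header:
            match = pattern.match(line)
            if match:
                return match.group(1).strip().strip('"')
    return ""

major = get_header_version("MAJOR")    # 25 in this build
minor = get_header_version("MINOR")    # 1
suffix = get_header_version("SUFFIX")  # "-pre"
version = f"{major}.{minor}" + ("rc0" if suffix else "")
print(version)                          # 25.1rc0, matching spdk.__version__
```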
07:07:17 version -- common/autotest_common.sh@10 -- # set +x 00:05:14.039 ************************************ 00:05:14.039 END TEST version 00:05:14.039 ************************************ 00:05:14.039 07:07:17 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:14.039 07:07:17 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:14.039 07:07:17 -- spdk/autotest.sh@194 -- # uname -s 00:05:14.039 07:07:17 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:14.039 07:07:17 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:14.039 07:07:17 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:14.039 07:07:17 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:14.039 07:07:17 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:05:14.039 07:07:17 -- spdk/autotest.sh@256 -- # timing_exit lib 00:05:14.039 07:07:17 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:14.040 07:07:17 -- common/autotest_common.sh@10 -- # set +x 00:05:14.298 07:07:17 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:05:14.298 07:07:17 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:05:14.298 07:07:17 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:05:14.298 07:07:17 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:05:14.298 07:07:17 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:05:14.298 07:07:17 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:05:14.298 07:07:17 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:14.298 07:07:17 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:14.298 07:07:17 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:14.298 07:07:17 -- common/autotest_common.sh@10 -- # set +x 00:05:14.298 ************************************ 00:05:14.298 START TEST nvmf_tcp 00:05:14.298 ************************************ 00:05:14.298 07:07:17 nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:14.298 * Looking for test storage... 
00:05:14.298 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:14.298 07:07:17 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:14.298 07:07:17 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:14.298 07:07:17 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:14.298 07:07:17 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:14.298 07:07:17 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.298 07:07:17 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.298 07:07:17 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.298 07:07:17 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.298 07:07:17 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.298 07:07:17 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.298 07:07:17 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.298 07:07:17 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.298 07:07:17 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.299 07:07:17 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.299 07:07:17 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.299 07:07:17 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:14.299 07:07:17 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:14.299 07:07:17 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.299 07:07:17 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:14.299 07:07:17 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:14.299 07:07:17 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:14.299 07:07:17 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.299 07:07:17 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:14.299 07:07:17 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.299 07:07:17 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:14.299 07:07:17 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:14.299 07:07:17 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.299 07:07:17 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:14.299 07:07:17 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.299 07:07:17 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.299 07:07:17 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.299 07:07:17 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:14.299 07:07:17 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.299 07:07:17 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:14.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.299 --rc genhtml_branch_coverage=1 00:05:14.299 --rc genhtml_function_coverage=1 00:05:14.299 --rc genhtml_legend=1 00:05:14.299 --rc geninfo_all_blocks=1 00:05:14.299 --rc geninfo_unexecuted_blocks=1 00:05:14.299 00:05:14.299 ' 00:05:14.299 07:07:17 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:14.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.299 --rc genhtml_branch_coverage=1 00:05:14.299 --rc genhtml_function_coverage=1 00:05:14.299 --rc genhtml_legend=1 00:05:14.299 --rc geninfo_all_blocks=1 00:05:14.299 --rc geninfo_unexecuted_blocks=1 00:05:14.299 00:05:14.299 ' 00:05:14.299 07:07:17 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:05:14.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.299 --rc genhtml_branch_coverage=1 00:05:14.299 --rc genhtml_function_coverage=1 00:05:14.299 --rc genhtml_legend=1 00:05:14.299 --rc geninfo_all_blocks=1 00:05:14.299 --rc geninfo_unexecuted_blocks=1 00:05:14.299 00:05:14.299 ' 00:05:14.299 07:07:17 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:14.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.299 --rc genhtml_branch_coverage=1 00:05:14.299 --rc genhtml_function_coverage=1 00:05:14.299 --rc genhtml_legend=1 00:05:14.299 --rc geninfo_all_blocks=1 00:05:14.299 --rc geninfo_unexecuted_blocks=1 00:05:14.299 00:05:14.299 ' 00:05:14.299 07:07:17 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:14.299 07:07:17 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:14.299 07:07:17 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:14.299 07:07:17 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:14.299 07:07:17 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:14.299 07:07:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:14.299 ************************************ 00:05:14.299 START TEST nvmf_target_core 00:05:14.299 ************************************ 00:05:14.299 07:07:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:14.299 * Looking for test storage... 00:05:14.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:14.558 07:07:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:14.558 07:07:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:05:14.558 07:07:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:14.558 07:07:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:14.558 07:07:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.558 07:07:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.558 07:07:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.558 07:07:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.558 07:07:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.558 07:07:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.558 07:07:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.558 07:07:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.558 07:07:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.558 07:07:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.558 07:07:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.558 07:07:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:14.558 07:07:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:14.558 07:07:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.558 07:07:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:14.558 07:07:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:14.558 07:07:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:14.558 07:07:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.558 07:07:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:14.558 07:07:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.558 07:07:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:14.558 07:07:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:14.558 07:07:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.558 07:07:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:14.558 07:07:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.558 07:07:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.558 07:07:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.558 07:07:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:14.558 07:07:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.558 07:07:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:14.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.558 --rc genhtml_branch_coverage=1 00:05:14.558 --rc genhtml_function_coverage=1 00:05:14.558 --rc genhtml_legend=1 00:05:14.558 --rc geninfo_all_blocks=1 00:05:14.558 --rc geninfo_unexecuted_blocks=1 00:05:14.558 00:05:14.558 ' 00:05:14.558 07:07:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:14.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.558 --rc genhtml_branch_coverage=1 00:05:14.558 --rc genhtml_function_coverage=1 00:05:14.558 --rc genhtml_legend=1 00:05:14.558 --rc geninfo_all_blocks=1 00:05:14.558 --rc geninfo_unexecuted_blocks=1 00:05:14.558 00:05:14.558 ' 00:05:14.558 07:07:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:14.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.558 --rc genhtml_branch_coverage=1 00:05:14.558 --rc genhtml_function_coverage=1 00:05:14.558 --rc genhtml_legend=1 00:05:14.558 --rc geninfo_all_blocks=1 00:05:14.558 --rc geninfo_unexecuted_blocks=1 00:05:14.559 00:05:14.559 ' 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:14.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.559 --rc genhtml_branch_coverage=1 00:05:14.559 --rc genhtml_function_coverage=1 00:05:14.559 --rc genhtml_legend=1 00:05:14.559 --rc geninfo_all_blocks=1 00:05:14.559 --rc geninfo_unexecuted_blocks=1 00:05:14.559 00:05:14.559 ' 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:14.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:14.559 
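The run_test call above wraps test/nvmf/target/abort.sh: it prints the START/END banners seen below, times the script, and passes --transport=tcp straight through to it. A minimal hedged sketch of invoking the same test outside the harness, assuming the SPDK checkout path used throughout this job (root privileges are needed for the netns/iptables setup the script performs):

# Hedged sketch: re-running the abort target test by hand, outside run_test.
# SPDK_DIR is an assumed shorthand for the workspace path that appears throughout this log.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sudo "$SPDK_DIR/test/nvmf/target/abort.sh" --transport=tcp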
************************************ 00:05:14.559 START TEST nvmf_abort 00:05:14.559 ************************************ 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:14.559 * Looking for test storage... 00:05:14.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:05:14.559 07:07:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:14.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.818 --rc genhtml_branch_coverage=1 00:05:14.818 --rc genhtml_function_coverage=1 00:05:14.818 --rc genhtml_legend=1 00:05:14.818 --rc geninfo_all_blocks=1 00:05:14.818 --rc geninfo_unexecuted_blocks=1 00:05:14.818 00:05:14.818 ' 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:14.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.818 --rc genhtml_branch_coverage=1 00:05:14.818 --rc genhtml_function_coverage=1 00:05:14.818 --rc genhtml_legend=1 00:05:14.818 --rc geninfo_all_blocks=1 00:05:14.818 --rc geninfo_unexecuted_blocks=1 00:05:14.818 00:05:14.818 ' 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:14.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.818 --rc genhtml_branch_coverage=1 00:05:14.818 --rc genhtml_function_coverage=1 00:05:14.818 --rc genhtml_legend=1 00:05:14.818 --rc geninfo_all_blocks=1 00:05:14.818 --rc geninfo_unexecuted_blocks=1 00:05:14.818 00:05:14.818 ' 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:14.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.818 --rc genhtml_branch_coverage=1 00:05:14.818 --rc genhtml_function_coverage=1 00:05:14.818 --rc genhtml_legend=1 00:05:14.818 --rc geninfo_all_blocks=1 00:05:14.818 --rc geninfo_unexecuted_blocks=1 00:05:14.818 00:05:14.818 ' 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:14.818 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:14.819 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:14.819 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:14.819 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:14.819 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:14.819 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.819 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.819 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.819 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:14.819 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.819 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:14.819 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:14.819 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:14.819 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:14.819 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:14.819 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:14.819 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:14.819 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:14.819 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:14.819 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:14.819 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:14.819 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:14.819 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:14.819 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
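The nvmftestinit trace that follows probes the two E810 ports (0000:09:00.0/1, device 0x159b), picks up their net devices cvl_0_0 and cvl_0_1, and builds a two-namespace TCP topology: cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), an iptables rule opens TCP port 4420, and both directions are verified with ping. A condensed, hedged sketch of that setup; interface names and addresses are the ones this log uses, and the real logic lives in test/nvmf/common.sh:

# Hedged sketch of the netns topology nvmftestinit builds (condensed from the trace below).
TARGET_IF=cvl_0_0              # NIC port that will live inside the target namespace
INITIATOR_IF=cvl_0_1           # NIC port that stays in the root namespace
NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                      # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator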
00:05:14.819 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:14.819 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:14.819 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:14.819 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:14.819 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:14.819 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:14.819 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:14.819 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:14.819 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:14.819 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:14.819 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:14.819 07:07:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:17.350 07:07:20 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:05:17.350 Found 0000:09:00.0 (0x8086 - 0x159b) 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:05:17.350 Found 0000:09:00.1 (0x8086 - 0x159b) 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:17.350 07:07:20 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:05:17.350 Found net devices under 0000:09:00.0: cvl_0_0 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:05:17.350 Found net devices under 0000:09:00.1: cvl_0_1 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:17.350 07:07:20 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:17.350 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:17.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:17.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:05:17.350 00:05:17.351 --- 10.0.0.2 ping statistics --- 00:05:17.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:17.351 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:05:17.351 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:17.351 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:17.351 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:05:17.351 00:05:17.351 --- 10.0.0.1 ping statistics --- 00:05:17.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:17.351 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:05:17.351 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:17.351 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:17.351 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:17.351 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:17.351 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:17.351 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:17.351 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:17.351 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:17.351 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:17.351 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:17.351 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:17.351 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:17.351 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:17.351 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2387487 00:05:17.351 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:17.351 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2387487 00:05:17.351 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 2387487 ']' 00:05:17.351 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.351 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:17.351 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.351 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:17.351 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:17.351 [2024-11-20 07:07:20.444199] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
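The target itself was launched just above with ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE. The -m 0xE core mask is binary 1110, i.e. cores 1 through 3, which is why the EAL/app notices that follow report three available cores and reactors starting on cores 1, 2 and 3. A small hedged aside decoding the mask:

# Hedged aside: decoding the -m 0xE reactor mask passed to nvmf_tgt above.
# 0xE == binary 1110 -> bits 1, 2 and 3 are set, so reactors run on cores 1-3 and core 0 stays free.
for core in 0 1 2 3; do
    (( (0xE >> core) & 1 )) && echo "reactor on core $core"
done    # prints cores 1, 2 and 3, matching the reactor notices below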
00:05:17.351 [2024-11-20 07:07:20.444280] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:17.351 [2024-11-20 07:07:20.518744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:17.351 [2024-11-20 07:07:20.579104] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:17.351 [2024-11-20 07:07:20.579174] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:17.351 [2024-11-20 07:07:20.579189] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:17.351 [2024-11-20 07:07:20.579201] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:17.351 [2024-11-20 07:07:20.579211] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:17.351 [2024-11-20 07:07:20.580730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:17.351 [2024-11-20 07:07:20.580794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:17.351 [2024-11-20 07:07:20.580797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.351 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:17.351 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:05:17.351 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:17.351 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:17.351 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:17.351 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:17.351 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:17.351 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.351 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:17.351 [2024-11-20 07:07:20.730776] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:17.351 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.351 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:17.351 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.351 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:17.351 Malloc0 00:05:17.351 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.351 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:17.351 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.351 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:17.610 Delay0 
00:05:17.610 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.610 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:17.610 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.610 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:17.610 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.610 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:17.610 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.610 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:17.610 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.610 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:17.610 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.610 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:17.610 [2024-11-20 07:07:20.801892] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:17.610 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.610 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:17.610 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.610 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:17.610 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.610 07:07:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:17.610 [2024-11-20 07:07:20.876308] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:19.511 Initializing NVMe Controllers 00:05:19.511 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:19.511 controller IO queue size 128 less than required 00:05:19.511 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:19.511 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:19.511 Initialization complete. Launching workers. 
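Before the workload above was launched, abort.sh configured the target over RPC: a TCP transport, a 64 MiB Malloc0 bdev with 4096-byte blocks wrapped in a Delay0 delay bdev, and subsystem nqn.2016-06.io.spdk:cnode0 listening on 10.0.0.2:4420; the abort example then drives it at queue depth 128, and the NS/CTRLR counters that follow show roughly 28.7k aborts submitted with only a few dozen failing. A hedged sketch of the same sequence expressed with scripts/rpc.py instead of the harness's rpc_cmd wrapper; method names and arguments mirror the trace above, and SPDK_DIR is the assumed workspace shorthand used earlier:

# Hedged sketch: the RPC sequence abort.sh drove above, written out via scripts/rpc.py.
RPC="$SPDK_DIR/scripts/rpc.py"
$RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
$RPC bdev_malloc_create 64 4096 -b Malloc0
$RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# ...and the workload generator whose counters appear below:
"$SPDK_DIR/build/examples/abort" -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128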
00:05:19.512 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28683 00:05:19.512 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28744, failed to submit 62 00:05:19.512 success 28687, unsuccessful 57, failed 0 00:05:19.512 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:19.512 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.512 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:19.512 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.512 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:19.512 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:19.512 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:19.512 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:19.512 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:19.512 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:19.512 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:19.512 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:19.512 rmmod nvme_tcp 00:05:19.772 rmmod nvme_fabrics 00:05:19.772 rmmod nvme_keyring 00:05:19.772 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:19.772 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:19.772 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:19.772 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2387487 ']' 00:05:19.772 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2387487 00:05:19.772 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 2387487 ']' 00:05:19.772 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 2387487 00:05:19.772 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:05:19.772 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:19.772 07:07:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2387487 00:05:19.772 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:05:19.772 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:05:19.772 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2387487' 00:05:19.772 killing process with pid 2387487 00:05:19.772 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # kill 2387487 00:05:19.772 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@976 -- # wait 2387487 00:05:20.032 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:20.032 07:07:23 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:20.032 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:20.032 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:20.032 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:20.032 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:20.032 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:20.032 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:20.032 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:20.032 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:20.032 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:20.032 07:07:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:21.935 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:21.935 00:05:21.935 real 0m7.453s 00:05:21.935 user 0m10.464s 00:05:21.935 sys 0m2.604s 00:05:21.935 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:21.935 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:21.935 ************************************ 00:05:21.935 END TEST nvmf_abort 00:05:21.935 ************************************ 00:05:21.935 07:07:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:21.935 07:07:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:21.935 07:07:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:21.935 07:07:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:22.194 ************************************ 00:05:22.194 START TEST nvmf_ns_hotplug_stress 00:05:22.194 ************************************ 00:05:22.194 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:22.194 * Looking for test storage... 
00:05:22.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:22.194 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:22.194 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:05:22.194 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:22.194 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:22.194 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:22.194 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:22.194 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:22.194 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.194 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:22.194 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:22.194 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:22.194 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:22.194 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:22.194 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:22.194 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:22.194 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:22.194 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:22.194 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:22.194 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:22.194 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:22.194 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:22.194 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.194 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:22.194 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:22.194 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:22.194 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:22.194 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.194 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:22.194 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:22.194 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:22.194 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:22.194 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:22.194 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.194 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:22.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.194 --rc genhtml_branch_coverage=1 00:05:22.194 --rc genhtml_function_coverage=1 00:05:22.194 --rc genhtml_legend=1 00:05:22.194 --rc geninfo_all_blocks=1 00:05:22.194 --rc geninfo_unexecuted_blocks=1 00:05:22.194 00:05:22.194 ' 00:05:22.194 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:22.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.194 --rc genhtml_branch_coverage=1 00:05:22.194 --rc genhtml_function_coverage=1 00:05:22.194 --rc genhtml_legend=1 00:05:22.194 --rc geninfo_all_blocks=1 00:05:22.194 --rc geninfo_unexecuted_blocks=1 00:05:22.194 00:05:22.194 ' 00:05:22.194 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:22.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.194 --rc genhtml_branch_coverage=1 00:05:22.195 --rc genhtml_function_coverage=1 00:05:22.195 --rc genhtml_legend=1 00:05:22.195 --rc geninfo_all_blocks=1 00:05:22.195 --rc geninfo_unexecuted_blocks=1 00:05:22.195 00:05:22.195 ' 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:22.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.195 --rc genhtml_branch_coverage=1 00:05:22.195 --rc genhtml_function_coverage=1 00:05:22.195 --rc genhtml_legend=1 00:05:22.195 --rc geninfo_all_blocks=1 00:05:22.195 --rc geninfo_unexecuted_blocks=1 00:05:22.195 00:05:22.195 ' 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:22.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:22.195 07:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:05:24.768 Found 0000:09:00.0 (0x8086 - 0x159b) 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:24.768 
07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:05:24.768 Found 0000:09:00.1 (0x8086 - 0x159b) 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:05:24.768 Found net devices under 0000:09:00.0: cvl_0_0 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:05:24.768 Found net devices under 0000:09:00.1: cvl_0_1 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:24.768 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:24.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:24.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:05:24.769 00:05:24.769 --- 10.0.0.2 ping statistics --- 00:05:24.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:24.769 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:24.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:24.769 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:05:24.769 00:05:24.769 --- 10.0.0.1 ping statistics --- 00:05:24.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:24.769 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2389736 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2389736 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 
2389736 ']' 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:24.769 07:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:24.769 [2024-11-20 07:07:27.934266] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:05:24.769 [2024-11-20 07:07:27.934385] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:24.769 [2024-11-20 07:07:28.005587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:24.769 [2024-11-20 07:07:28.059616] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:24.769 [2024-11-20 07:07:28.059671] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:24.769 [2024-11-20 07:07:28.059698] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:24.769 [2024-11-20 07:07:28.059709] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:24.769 [2024-11-20 07:07:28.059718] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
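(For orientation, the nvmftestinit/nvmfappstart sequence traced above reduces to roughly the following shell steps. This is a condensed sketch assembled only from commands visible in this log; the interface names cvl_0_0/cvl_0_1, the 10.0.0.x addresses and the -m 0xE core mask are values this particular run detected or chose, and the iptables rule is shown without the SPDK_NVMF comment tag it carries above.)
  # target-side port of the e810 pair goes into a private network namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator-side port stays in the default namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP listener port and check reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # launch the SPDK NVMe-oF target inside the namespace, then wait for its RPC socket
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  # (the harness polls /var/tmp/spdk.sock -- "waitforlisten" -- before issuing any rpc.py calls)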
00:05:24.769 [2024-11-20 07:07:28.061192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:24.769 [2024-11-20 07:07:28.061256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:24.769 [2024-11-20 07:07:28.061259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.769 07:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:24.769 07:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:05:24.769 07:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:24.769 07:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:24.769 07:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:25.026 07:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:25.027 07:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:25.027 07:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:25.284 [2024-11-20 07:07:28.470535] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:25.284 07:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:25.542 07:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:25.800 [2024-11-20 07:07:29.005706] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:25.800 07:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:26.058 07:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:26.316 Malloc0 00:05:26.316 07:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:26.575 Delay0 00:05:26.575 07:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:26.833 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:27.090 NULL1 00:05:27.091 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:27.348 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2390161 00:05:27.348 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:27.348 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2390161 00:05:27.348 07:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.719 Read completed with error (sct=0, sc=11) 00:05:28.719 07:07:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.719 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.719 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.719 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.719 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.719 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.719 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.719 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.977 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:28.977 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:28.977 true 00:05:29.235 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2390161 00:05:29.235 07:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.801 07:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.058 07:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:30.058 07:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:30.316 true 00:05:30.316 07:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2390161 00:05:30.316 07:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.574 07:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.832 07:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:30.832 07:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:31.089 true 00:05:31.089 07:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2390161 00:05:31.089 07:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.656 07:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.656 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:31.656 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:31.914 true 00:05:31.914 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2390161 00:05:31.914 07:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.288 07:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.288 07:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:33.288 07:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:33.546 true 00:05:33.546 07:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2390161 00:05:33.546 07:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.804 07:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.061 07:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:34.061 07:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:34.319 true 00:05:34.319 07:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2390161 00:05:34.319 07:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.252 07:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.252 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:35.509 07:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:35.509 07:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:35.768 true 00:05:35.768 07:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2390161 00:05:35.768 07:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.025 07:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.283 07:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:36.283 07:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:36.540 true 00:05:36.540 07:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2390161 00:05:36.540 07:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.798 07:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.056 07:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:37.056 07:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:37.315 true 00:05:37.572 07:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2390161 00:05:37.572 07:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.506 07:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.764 07:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:38.764 07:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:39.022 true 00:05:39.022 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2390161 00:05:39.022 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.280 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.537 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:39.538 07:07:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:39.795 true 00:05:39.795 07:07:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2390161 00:05:39.795 07:07:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.054 07:07:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.311 07:07:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:40.311 07:07:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:40.569 true 00:05:40.569 07:07:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2390161 00:05:40.569 07:07:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.503 07:07:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.760 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:41.760 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:42.017 true 00:05:42.017 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2390161 00:05:42.017 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.275 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.533 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:42.533 07:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:42.791 true 00:05:42.791 07:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2390161 00:05:42.791 07:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.048 07:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.305 07:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:43.306 07:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:43.563 true 00:05:43.820 07:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2390161 00:05:43.820 07:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.754 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.754 07:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.754 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.012 07:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:45.012 07:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:45.270 true 00:05:45.270 07:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2390161 00:05:45.270 07:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.527 07:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.784 07:07:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:45.784 07:07:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:46.041 true 00:05:46.041 07:07:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2390161 00:05:46.041 07:07:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.298 07:07:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.556 07:07:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:46.556 07:07:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:46.814 true 00:05:46.814 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2390161 00:05:46.814 07:07:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.747 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.005 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:48.005 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:48.264 true 00:05:48.264 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2390161 00:05:48.264 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.559 07:07:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.841 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:48.841 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:49.098 true 00:05:49.098 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2390161 00:05:49.098 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.355 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.612 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:49.612 07:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:49.869 true 00:05:49.869 07:07:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2390161 00:05:49.869 07:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.800 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.800 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:51.058 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:51.058 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:51.316 true 00:05:51.316 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2390161 00:05:51.316 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.574 07:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.832 07:07:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:51.833 07:07:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:52.091 true 00:05:52.091 07:07:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2390161 00:05:52.091 07:07:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.026 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.026 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:53.026 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:53.026 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:53.284 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:53.284 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:53.542 true 00:05:53.542 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2390161 00:05:53.542 07:07:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.800 07:07:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.057 07:07:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:54.057 07:07:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:54.316 true 00:05:54.316 07:07:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2390161 00:05:54.316 07:07:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.251 07:07:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.251 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.251 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.251 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.509 07:07:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:55.509 07:07:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:55.767 true 00:05:55.767 07:07:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2390161 00:05:55.767 07:07:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.025 07:07:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.283 07:07:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:56.283 07:07:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:56.541 true 00:05:56.541 07:07:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2390161 00:05:56.541 07:07:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.478 07:08:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.737 Initializing NVMe Controllers 00:05:57.737 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:57.737 Controller IO queue size 128, less than required. 00:05:57.737 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:05:57.737 Controller IO queue size 128, less than required. 00:05:57.737 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:57.737 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:05:57.737 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:05:57.737 Initialization complete. Launching workers. 00:05:57.737 ======================================================== 00:05:57.737 Latency(us) 00:05:57.737 Device Information : IOPS MiB/s Average min max 00:05:57.737 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 720.02 0.35 80041.57 3429.81 1016384.50 00:05:57.737 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9213.41 4.50 13893.18 3254.00 538261.00 00:05:57.737 ======================================================== 00:05:57.737 Total : 9933.43 4.85 18687.90 3254.00 1016384.50 00:05:57.737 00:05:57.737 07:08:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:57.737 07:08:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:57.996 true 00:05:57.996 07:08:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2390161 00:05:57.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2390161) - No such process 00:05:57.996 07:08:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2390161 00:05:57.996 07:08:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.254 07:08:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:58.513 07:08:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:05:58.513 07:08:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:05:58.513 07:08:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:05:58.513 07:08:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:58.513 07:08:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:05:58.771 null0 00:05:58.771 07:08:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:58.771 07:08:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:58.771 07:08:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:05:59.029 null1 00:05:59.029 07:08:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:59.029 07:08:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:59.029 07:08:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:05:59.287 null2 00:05:59.287 07:08:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:59.287 07:08:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:59.287 07:08:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:05:59.545 null3 00:05:59.545 07:08:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:59.545 07:08:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:59.545 07:08:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:05:59.804 null4 00:05:59.804 07:08:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:59.804 07:08:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:59.804 07:08:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:00.062 null5 00:06:00.062 07:08:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:00.062 07:08:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:00.062 07:08:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:00.322 null6 00:06:00.322 07:08:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:00.322 07:08:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:00.322 07:08:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:00.890 null7 00:06:00.890 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:00.890 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:00.890 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:00.890 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:00.890 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
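As a sanity check on the perf summary above: the Total row is the two namespace rows combined, 720.02 + 9213.41 = 9933.43 IOPS and 0.35 + 4.50 = 4.85 MiB/s, and its average latency is the IOPS-weighted mean of the per-namespace averages, (720.02 * 80041.57 + 9213.41 * 13893.18) / 9933.43 ≈ 18687.9 us, with min/max taken across both rows. NSID 1, the namespace that was being hot-removed and re-added throughout the run, averages roughly six times the latency of NSID 2, which is consistent with the hotplug churn on that namespace (and, presumably, a delay-injecting bdev behind the Delay0 name).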
00:06:00.890 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
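Before the multi-threaded phase, the sh@58-60 records above create one null bdev per worker thread: eight bdevs named null0 through null7, each 100 MB with a 4096-byte block size. A minimal sketch of that setup loop under the same rpc.py path; the rpc shorthand is only for readability:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8
    for ((i = 0; i < nthreads; i++)); do
        # bdev_null_create <name> <size_MB> <block_size>
        $rpc bdev_null_create "null$i" 100 4096
    done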
00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
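The interleaved sh@14-18 records from here on are eight copies of the same worker: each add_remove call pins one namespace ID to one null bdev and hot-adds then hot-removes that namespace ten times, and all eight calls run as background jobs that the script later waits on (the wait 2394217 2394218 ... record below). A bash sketch of that pattern as reconstructed from the trace; the function body and loop variable are inferred, not copied:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8

    # add_remove <nsid> <bdev>: attach and detach one namespace ten times in a row
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &   # nsid 1..8 paired with null0..null7
        pids+=($!)
    done
    wait "${pids[@]}"                      # PIDs 2394217 2394218 ... in this run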
00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2394217 2394218 2394220 2394222 2394224 2394226 2394228 2394230 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.891 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:01.150 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:01.150 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:01.150 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.150 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:01.150 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:01.150 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:01.150 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:01.150 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:01.409 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.409 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.409 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.409 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:01.409 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.409 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:01.409 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.409 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.409 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:01.409 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.409 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.409 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:01.409 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.409 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.409 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:01.409 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.409 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.409 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:01.409 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:06:01.409 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.409 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:01.409 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.409 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.409 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:01.668 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:01.668 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.668 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:01.668 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:01.668 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:01.668 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:01.668 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:01.668 07:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:01.927 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.927 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.927 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:01.927 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.927 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.927 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:01.927 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.927 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.927 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:01.927 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.927 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.927 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:01.927 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.927 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.927 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:01.927 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.927 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.927 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:01.927 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.927 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.927 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:01.927 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.927 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.927 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:02.186 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:02.186 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.186 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:02.186 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:02.186 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:02.186 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:02.186 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:02.186 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:02.445 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.445 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.445 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:02.445 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.445 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.445 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:02.445 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.445 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.445 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:02.445 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.445 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.445 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.445 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:02.445 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.445 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:02.445 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.445 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.445 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:02.704 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.704 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.704 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.704 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:02.704 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.704 07:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:02.962 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:02.962 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.963 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:02.963 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:02.963 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:02.963 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:02.963 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:02.963 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:03.220 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:03.220 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.220 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:03.220 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.220 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.220 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:03.220 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.220 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.220 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:03.220 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.220 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.220 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:03.220 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.220 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.220 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.220 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.220 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:03.220 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:03.220 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.220 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.220 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:03.220 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.220 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.220 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:03.478 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:03.478 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.478 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:03.478 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:03.478 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:03.478 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:03.478 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:03.478 07:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:03.736 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.736 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.736 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:03.736 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.736 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.736 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:03.736 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.736 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.736 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:03.736 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:03.736 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.736 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:03.736 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.736 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.736 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:03.736 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.736 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.736 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:03.737 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.737 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.737 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:03.737 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.737 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.737 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:03.995 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.995 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:03.995 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:03.995 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:03.995 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:03.995 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:03.995 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:03.995 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:04.253 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.253 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.253 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:04.253 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.253 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.253 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:04.253 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.253 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.253 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:04.253 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.253 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.253 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:04.253 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.253 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.253 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:04.253 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.253 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.253 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:04.253 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:06:04.253 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.253 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:04.253 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.253 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.253 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:04.819 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:04.819 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.819 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:04.819 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:04.819 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:04.819 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:04.819 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:04.819 07:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:05.077 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.077 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.077 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:05.077 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.077 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.077 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:05.077 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.077 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.077 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:05.077 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.077 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.077 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:05.077 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.077 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.077 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:05.077 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.077 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.077 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:05.077 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.077 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.077 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:05.077 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.077 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.077 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:05.335 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.336 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:05.336 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:05.336 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:05.336 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:05.336 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:05.336 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:05.336 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:05.593 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.593 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.593 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:05.593 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.593 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.593 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:05.593 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.593 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.593 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:05.593 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.593 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.593 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.593 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:05.593 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.593 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:05.593 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.593 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.593 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:05.593 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.593 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.593 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.593 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.593 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:05.594 07:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:05.851 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:05.851 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:05.851 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:05.851 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:05.851 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:05.851 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:05.851 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:05.851 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.109 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:06.109 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.109 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:06.109 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.109 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.109 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.109 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.109 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:06.109 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:06.109 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.109 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.109 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:06.109 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.109 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.109 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:06.109 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.109 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.109 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:06.109 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.109 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.109 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:06.109 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.109 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.109 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:06.367 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:06.367 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:06.367 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:06.625 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:06.625 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.625 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:06.625 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:06.625 07:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:06.884 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.884 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.884 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.884 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.884 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.884 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.884 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.884 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.884 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.884 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.884 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.884 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.884 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.884 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.884 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.884 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.884 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:06.884 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:06.884 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:06.884 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:06.884 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:06.884 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:06.884 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:06.884 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:06.884 rmmod nvme_tcp 00:06:06.884 rmmod nvme_fabrics 00:06:06.884 rmmod nvme_keyring 00:06:06.884 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:06.884 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:06.884 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:06.884 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2389736 ']' 00:06:06.884 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2389736 00:06:06.884 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 2389736 ']' 00:06:06.884 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 2389736 00:06:06.884 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:06:06.884 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:06.884 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2389736 00:06:06.884 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:06:06.884 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:06:06.884 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2389736' 00:06:06.884 killing process with pid 2389736 00:06:06.884 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 2389736 00:06:06.884 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 2389736 00:06:07.143 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:07.143 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:07.143 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:07.143 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:07.143 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:07.143 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:07.143 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:07.143 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:07.143 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:07.143 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:07.143 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:07.143 07:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:09.679 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:09.679 00:06:09.679 real 0m47.143s 00:06:09.679 user 3m39.055s 00:06:09.679 sys 0m15.953s 00:06:09.679 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:09.679 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:09.679 ************************************ 00:06:09.679 END TEST nvmf_ns_hotplug_stress 00:06:09.679 ************************************ 00:06:09.679 07:08:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:09.679 07:08:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:09.679 07:08:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:09.679 07:08:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:09.679 ************************************ 00:06:09.679 START TEST nvmf_delete_subsystem 00:06:09.679 ************************************ 00:06:09.679 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:09.679 * Looking for test storage... 
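Before the next test starts, nvmftestfini (nvmf/common.sh) tears the target environment down: it unloads the kernel nvme-tcp and nvme-fabrics modules, kills the nvmf_tgt process it had started, strips its SPDK_NVMF-tagged iptables rules before restoring the ruleset, and removes the cvl_0_0_ns_spdk network namespace. A rough sketch of that teardown, based on the trace lines above; the helper name and the exact error handling are assumptions:

# Approximate shape of the teardown traced above (not the verbatim nvmf/common.sh)
nvmftestfini() {
    modprobe -v -r nvme-tcp nvme-fabrics                  # unload initiator kernel modules
    kill "$nvmfpid"                                       # stop the nvmf_tgt reactors
    while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.5; done
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only the SPDK-tagged rules
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null           # remove the target net namespace
    ip -4 addr flush cvl_0_1                              # clear the initiator-side address
}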
00:06:09.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:09.679 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:09.679 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:06:09.679 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:09.679 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:09.679 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.679 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.679 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.679 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.679 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.679 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.679 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.679 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.679 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.679 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.679 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.679 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:09.679 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:09.679 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.679 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:09.679 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:09.679 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:09.679 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.679 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:09.679 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.679 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:09.679 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:09.679 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.679 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:09.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.680 --rc genhtml_branch_coverage=1 00:06:09.680 --rc genhtml_function_coverage=1 00:06:09.680 --rc genhtml_legend=1 00:06:09.680 --rc geninfo_all_blocks=1 00:06:09.680 --rc geninfo_unexecuted_blocks=1 00:06:09.680 00:06:09.680 ' 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:09.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.680 --rc genhtml_branch_coverage=1 00:06:09.680 --rc genhtml_function_coverage=1 00:06:09.680 --rc genhtml_legend=1 00:06:09.680 --rc geninfo_all_blocks=1 00:06:09.680 --rc geninfo_unexecuted_blocks=1 00:06:09.680 00:06:09.680 ' 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:09.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.680 --rc genhtml_branch_coverage=1 00:06:09.680 --rc genhtml_function_coverage=1 00:06:09.680 --rc genhtml_legend=1 00:06:09.680 --rc geninfo_all_blocks=1 00:06:09.680 --rc geninfo_unexecuted_blocks=1 00:06:09.680 00:06:09.680 ' 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:09.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.680 --rc genhtml_branch_coverage=1 00:06:09.680 --rc genhtml_function_coverage=1 00:06:09.680 --rc genhtml_legend=1 00:06:09.680 --rc geninfo_all_blocks=1 00:06:09.680 --rc geninfo_unexecuted_blocks=1 00:06:09.680 00:06:09.680 ' 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:09.680 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:09.680 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:09.681 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:09.681 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:09.681 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:09.681 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:09.681 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:09.681 07:08:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:11.582 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:11.582 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:11.582 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:11.582 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:11.582 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:11.582 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:11.582 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:11.582 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:11.582 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:11.582 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:11.582 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:11.582 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:11.582 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:06:11.582 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:11.582 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:11.582 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:11.582 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:11.582 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:11.582 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:11.583 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:11.583 
07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:11.583 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:11.583 Found net devices under 0000:09:00.0: cvl_0_0 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:11.583 Found net devices under 0000:09:00.1: cvl_0_1 
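To pick its test NICs, nvmftestinit scans the PCI bus for the Intel E810 device ID 0x159b and records the network interfaces behind each match; here that yields cvl_0_0 under 0000:09:00.0 and cvl_0_1 under 0000:09:00.1. A simplified sysfs-only sketch of that discovery (the real common.sh also knows the x722 and Mellanox IDs and applies RDMA-specific checks):

# Simplified e810 discovery matching the 'Found net devices under ...' lines above (assumption: sysfs walk only)
intel=0x8086; e810=0x159b
net_devs=()
for pci in /sys/bus/pci/devices/*; do
    [[ $(cat "$pci/vendor") == "$intel" && $(cat "$pci/device") == "$e810" ]] || continue
    for dev in "$pci"/net/*; do
        [[ -e $dev ]] && net_devs+=("${dev##*/}")          # e.g. cvl_0_0, cvl_0_1
    done
done
echo "Found net devices: ${net_devs[*]}"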
00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:11.583 07:08:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:11.583 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:11.583 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:11.583 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:11.842 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:11.842 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:11.842 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:11.842 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:11.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:11.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:06:11.842 00:06:11.842 --- 10.0.0.2 ping statistics --- 00:06:11.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:11.842 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:06:11.842 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:11.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:11.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:06:11.842 00:06:11.842 --- 10.0.0.1 ping statistics --- 00:06:11.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:11.842 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:06:11.842 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:11.842 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:11.842 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:11.842 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:11.842 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:11.842 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:11.842 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:11.842 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:11.842 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:11.842 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:11.842 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:11.842 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:11.842 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:11.842 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2397010 00:06:11.842 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:11.842 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2397010 00:06:11.842 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 2397010 ']' 00:06:11.842 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.842 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:11.842 07:08:15 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.843 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:11.843 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:11.843 [2024-11-20 07:08:15.150204] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:06:11.843 [2024-11-20 07:08:15.150297] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:11.843 [2024-11-20 07:08:15.224483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.101 [2024-11-20 07:08:15.284453] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:12.101 [2024-11-20 07:08:15.284498] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:12.101 [2024-11-20 07:08:15.284527] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:12.101 [2024-11-20 07:08:15.284538] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:12.101 [2024-11-20 07:08:15.284548] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:12.101 [2024-11-20 07:08:15.285978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.101 [2024-11-20 07:08:15.285983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.101 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:12.101 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:06:12.101 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:12.102 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:12.102 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:12.102 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:12.102 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:12.102 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.102 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:12.102 [2024-11-20 07:08:15.453812] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:12.102 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.102 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:12.102 07:08:15 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.102 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:12.102 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.102 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:12.102 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.102 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:12.102 [2024-11-20 07:08:15.470023] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:12.102 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.102 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:12.102 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.102 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:12.102 NULL1 00:06:12.102 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.102 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:12.102 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.102 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:12.102 Delay0 00:06:12.102 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.102 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.102 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.102 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:12.102 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.102 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2397152 00:06:12.102 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:12.102 07:08:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:12.360 [2024-11-20 07:08:15.554845] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
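That completes the delete_subsystem.sh setup: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 capped at 10 namespaces, a listener on 10.0.0.2:4420, and a null bdev wrapped in the delay bdev Delay0 so that queued I/O stays outstanding long enough to race with the deletion, with spdk_nvme_perf driving 128-deep random I/O against it. Condensed from the RPC calls traced above (full rpc.py and binary paths are shortened; everything else is copied from the log):

# Setup sequence from delete_subsystem.sh@15-@30 as traced above (paths shortened)
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_null_create NULL1 1000 512
rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
               -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
sleep 2
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # deleted while perf I/O is in flight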
00:06:14.363 07:08:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:14.363 07:08:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.363 07:08:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:14.363 Read completed with error (sct=0, sc=8) 00:06:14.363 starting I/O failed: -6 00:06:14.363 Read completed with error (sct=0, sc=8) 00:06:14.363 Read completed with error (sct=0, sc=8) 00:06:14.363 Write completed with error (sct=0, sc=8) 00:06:14.363 Write completed with error (sct=0, sc=8) 00:06:14.363 starting I/O failed: -6 00:06:14.363 Write completed with error (sct=0, sc=8) 00:06:14.363 Read completed with error (sct=0, sc=8) 00:06:14.363 Write completed with error (sct=0, sc=8) 00:06:14.363 Read completed with error (sct=0, sc=8) 00:06:14.363 starting I/O failed: -6 00:06:14.363 Read completed with error (sct=0, sc=8) 00:06:14.363 Read completed with error (sct=0, sc=8) 00:06:14.363 Write completed with error (sct=0, sc=8) 00:06:14.363 Read completed with error (sct=0, sc=8) 00:06:14.363 starting I/O failed: -6 00:06:14.363 Read completed with error (sct=0, sc=8) 00:06:14.363 Read completed with error (sct=0, sc=8) 00:06:14.363 Write completed with error (sct=0, sc=8) 00:06:14.363 Read completed with error (sct=0, sc=8) 00:06:14.363 starting I/O failed: -6 00:06:14.363 Write completed with error (sct=0, sc=8) 00:06:14.363 Read completed with error (sct=0, sc=8) 00:06:14.363 Write completed with error (sct=0, sc=8) 00:06:14.363 Read completed with error (sct=0, sc=8) 00:06:14.363 starting I/O failed: -6 00:06:14.363 Write completed with error (sct=0, sc=8) 00:06:14.363 Read completed with error (sct=0, sc=8) 00:06:14.363 Write completed with error (sct=0, sc=8) 00:06:14.363 Read completed with error (sct=0, sc=8) 00:06:14.363 starting I/O failed: -6 00:06:14.363 Read completed with error (sct=0, sc=8) 00:06:14.363 Read completed with error (sct=0, sc=8) 00:06:14.363 Write completed with error (sct=0, sc=8) 00:06:14.363 Read completed with error (sct=0, sc=8) 00:06:14.363 starting I/O failed: -6 00:06:14.363 Read completed with error (sct=0, sc=8) 00:06:14.363 Write completed with error (sct=0, sc=8) 00:06:14.363 Write completed with error (sct=0, sc=8) 00:06:14.363 Read completed with error (sct=0, sc=8) 00:06:14.363 starting I/O failed: -6 00:06:14.363 Read completed with error (sct=0, sc=8) 00:06:14.363 Write completed with error (sct=0, sc=8) 00:06:14.363 Write completed with error (sct=0, sc=8) 00:06:14.363 Read completed with error (sct=0, sc=8) 00:06:14.363 starting I/O failed: -6 00:06:14.363 Read completed with error (sct=0, sc=8) 00:06:14.363 Read completed with error (sct=0, sc=8) 00:06:14.363 Read completed with error (sct=0, sc=8) 00:06:14.363 Read completed with error (sct=0, sc=8) 00:06:14.363 starting I/O failed: -6 00:06:14.363 Read completed with error (sct=0, sc=8) 00:06:14.363 Write completed with error (sct=0, sc=8) 00:06:14.363 [2024-11-20 07:08:17.765713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ec860 is same with the state(6) to be set 00:06:14.363 Read completed with error (sct=0, sc=8) 00:06:14.363 Write completed with error (sct=0, sc=8) 00:06:14.363 starting I/O failed: -6 00:06:14.363 Read completed with error (sct=0, sc=8) 00:06:14.363 Read completed with error (sct=0, sc=8) 00:06:14.363 Read completed 
with error (sct=0, sc=8) 00:06:14.363 Write completed with error (sct=0, sc=8) 00:06:14.363 Read completed with error (sct=0, sc=8) 00:06:14.363 Read completed with error (sct=0, sc=8) 00:06:14.363 Write completed with error (sct=0, sc=8) 00:06:14.363 Read completed with error (sct=0, sc=8) 00:06:14.364 Write completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 starting I/O failed: -6 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Write completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 starting I/O failed: -6 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Write completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Write completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Write completed with error (sct=0, sc=8) 00:06:14.364 starting I/O failed: -6 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Write completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Write completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 starting I/O failed: -6 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Write completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Write completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 starting I/O failed: -6 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Write completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Write completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Write completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Write completed with error (sct=0, sc=8) 00:06:14.364 starting I/O failed: -6 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 
00:06:14.364 Write completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 starting I/O failed: -6 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Write completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Write completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 starting I/O failed: -6 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Write completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 starting I/O failed: -6 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 [2024-11-20 07:08:17.766443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe210000c40 is same with the state(6) to be set 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Write completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Write completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Write completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Write completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Write completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Write completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Write completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Write completed with error (sct=0, sc=8) 00:06:14.364 Write completed with error (sct=0, sc=8) 00:06:14.364 Read 
completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Write completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Write completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Write completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:14.364 Read completed with error (sct=0, sc=8) 00:06:15.732 [2024-11-20 07:08:18.732728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ed9a0 is same with the state(6) to be set 00:06:15.732 Write completed with error (sct=0, sc=8) 00:06:15.732 Read completed with error (sct=0, sc=8) 00:06:15.732 Read completed with error (sct=0, sc=8) 00:06:15.732 Read completed with error (sct=0, sc=8) 00:06:15.732 Read completed with error (sct=0, sc=8) 00:06:15.732 Read completed with error (sct=0, sc=8) 00:06:15.732 Read completed with error (sct=0, sc=8) 00:06:15.732 Write completed with error (sct=0, sc=8) 00:06:15.732 Write completed with error (sct=0, sc=8) 00:06:15.732 Read completed with error (sct=0, sc=8) 00:06:15.732 Read completed with error (sct=0, sc=8) 00:06:15.732 Read completed with error (sct=0, sc=8) 00:06:15.732 Write completed with error (sct=0, sc=8) 00:06:15.732 Read completed with error (sct=0, sc=8) 00:06:15.732 Read completed with error (sct=0, sc=8) 00:06:15.732 Read completed with error (sct=0, sc=8) 00:06:15.732 Read completed with error (sct=0, sc=8) 00:06:15.732 Read completed with error (sct=0, sc=8) 00:06:15.732 Read completed with error (sct=0, sc=8) 00:06:15.732 Read completed with error (sct=0, sc=8) 00:06:15.732 Write completed with error (sct=0, sc=8) 00:06:15.732 Read completed with error (sct=0, sc=8) 00:06:15.732 Read completed with error (sct=0, sc=8) 00:06:15.732 Read completed with error (sct=0, sc=8) 00:06:15.732 Read completed with error (sct=0, sc=8) 00:06:15.732 Write completed with error (sct=0, sc=8) 00:06:15.732 Read completed with error (sct=0, sc=8) 00:06:15.732 Read completed with error (sct=0, sc=8) 00:06:15.732 Write completed with error (sct=0, sc=8) 00:06:15.732 Read completed with error (sct=0, sc=8) 00:06:15.732 [2024-11-20 07:08:18.766395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe21000d800 is same with the state(6) to be set 00:06:15.732 Read completed with error (sct=0, sc=8) 00:06:15.732 Read completed with error (sct=0, sc=8) 00:06:15.732 Read completed with error (sct=0, sc=8) 00:06:15.732 Read completed with error (sct=0, sc=8) 00:06:15.732 Write completed with error (sct=0, sc=8) 00:06:15.732 Write completed with error (sct=0, sc=8) 00:06:15.732 Write completed with error (sct=0, sc=8) 00:06:15.732 Write completed with error (sct=0, sc=8) 00:06:15.732 Read completed with error (sct=0, sc=8) 00:06:15.732 Write completed with error (sct=0, sc=8) 00:06:15.732 Read completed with error (sct=0, sc=8) 00:06:15.732 Read completed with 
error (sct=0, sc=8) 00:06:15.732 Read completed with error (sct=0, sc=8) 00:06:15.732 Write completed with error (sct=0, sc=8) 00:06:15.733 Write completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Write completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Write completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Write completed with error (sct=0, sc=8) 00:06:15.733 [2024-11-20 07:08:18.766818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe21000d020 is same with the state(6) to be set 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Write completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Write completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 [2024-11-20 07:08:18.767467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ec680 is same with the state(6) to be set 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Write completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Write completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Write completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Write completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 
00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 Write completed with error (sct=0, sc=8) 00:06:15.733 Read completed with error (sct=0, sc=8) 00:06:15.733 [2024-11-20 07:08:18.769607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ec2c0 is same with the state(6) to be set 00:06:15.733 Initializing NVMe Controllers 00:06:15.733 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:15.733 Controller IO queue size 128, less than required. 00:06:15.733 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:15.733 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:15.733 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:15.733 Initialization complete. Launching workers. 00:06:15.733 ======================================================== 00:06:15.733 Latency(us) 00:06:15.733 Device Information : IOPS MiB/s Average min max 00:06:15.733 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 166.26 0.08 904229.00 674.79 1012412.82 00:06:15.733 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 159.81 0.08 996355.95 360.64 2001899.80 00:06:15.733 ======================================================== 00:06:15.733 Total : 326.08 0.16 949381.02 360.64 2001899.80 00:06:15.733 00:06:15.733 [2024-11-20 07:08:18.770176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ed9a0 (9): Bad file descriptor 00:06:15.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:15.733 07:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.733 07:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:15.733 07:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2397152 00:06:15.733 07:08:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:15.989 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:15.989 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2397152 00:06:15.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2397152) - No such process 00:06:15.989 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2397152 00:06:15.989 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:06:15.989 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2397152 00:06:15.989 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:06:15.989 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.989 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:06:15.989 07:08:19 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.989 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2397152 00:06:15.989 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:06:15.989 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:15.989 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:15.989 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:15.989 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:15.989 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.989 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.989 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.989 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:15.989 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.989 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.989 [2024-11-20 07:08:19.294682] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:15.989 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.989 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.989 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.989 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.989 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.989 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2397568 00:06:15.989 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:15.989 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:15.989 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2397568 00:06:15.989 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:15.989 [2024-11-20 07:08:19.368638] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. 
This behavior is deprecated and will be removed in a future release. 00:06:16.553 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:16.553 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2397568 00:06:16.553 07:08:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:17.117 07:08:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:17.117 07:08:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2397568 00:06:17.117 07:08:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:17.683 07:08:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:17.683 07:08:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2397568 00:06:17.683 07:08:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:17.940 07:08:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:17.941 07:08:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2397568 00:06:17.941 07:08:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:18.505 07:08:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:18.505 07:08:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2397568 00:06:18.505 07:08:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:19.070 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:19.070 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2397568 00:06:19.070 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:19.328 Initializing NVMe Controllers 00:06:19.328 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:19.328 Controller IO queue size 128, less than required. 00:06:19.328 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:19.328 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:19.328 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:19.328 Initialization complete. Launching workers. 
00:06:19.328 ======================================================== 00:06:19.328 Latency(us) 00:06:19.328 Device Information : IOPS MiB/s Average min max 00:06:19.328 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004435.70 1000165.88 1041954.36 00:06:19.328 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004957.34 1000244.93 1042410.34 00:06:19.328 ======================================================== 00:06:19.328 Total : 256.00 0.12 1004696.52 1000165.88 1042410.34 00:06:19.328 00:06:19.586 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:19.586 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2397568 00:06:19.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2397568) - No such process 00:06:19.586 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2397568 00:06:19.586 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:19.586 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:19.586 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:19.586 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:19.586 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:19.586 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:19.586 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:19.586 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:19.586 rmmod nvme_tcp 00:06:19.586 rmmod nvme_fabrics 00:06:19.586 rmmod nvme_keyring 00:06:19.586 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:19.586 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:19.586 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:19.586 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2397010 ']' 00:06:19.586 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2397010 00:06:19.586 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 2397010 ']' 00:06:19.586 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 2397010 00:06:19.586 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:06:19.586 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:19.586 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2397010 00:06:19.586 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:19.586 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' 
reactor_0 = sudo ']' 00:06:19.586 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2397010' 00:06:19.586 killing process with pid 2397010 00:06:19.586 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 2397010 00:06:19.586 07:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 2397010 00:06:19.844 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:19.844 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:19.844 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:19.844 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:19.844 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:19.844 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:19.844 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:19.844 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:19.844 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:19.844 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:19.844 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:19.844 07:08:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:22.383 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:22.383 00:06:22.383 real 0m12.621s 00:06:22.383 user 0m28.301s 00:06:22.383 sys 0m3.103s 00:06:22.383 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:22.383 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.383 ************************************ 00:06:22.383 END TEST nvmf_delete_subsystem 00:06:22.383 ************************************ 00:06:22.383 07:08:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:22.383 07:08:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:22.384 ************************************ 00:06:22.384 START TEST nvmf_host_management 00:06:22.384 ************************************ 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:22.384 * Looking for test storage... 
00:06:22.384 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:22.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.384 --rc genhtml_branch_coverage=1 00:06:22.384 --rc genhtml_function_coverage=1 00:06:22.384 --rc genhtml_legend=1 00:06:22.384 --rc geninfo_all_blocks=1 00:06:22.384 --rc geninfo_unexecuted_blocks=1 00:06:22.384 00:06:22.384 ' 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:22.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.384 --rc genhtml_branch_coverage=1 00:06:22.384 --rc genhtml_function_coverage=1 00:06:22.384 --rc genhtml_legend=1 00:06:22.384 --rc geninfo_all_blocks=1 00:06:22.384 --rc geninfo_unexecuted_blocks=1 00:06:22.384 00:06:22.384 ' 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:22.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.384 --rc genhtml_branch_coverage=1 00:06:22.384 --rc genhtml_function_coverage=1 00:06:22.384 --rc genhtml_legend=1 00:06:22.384 --rc geninfo_all_blocks=1 00:06:22.384 --rc geninfo_unexecuted_blocks=1 00:06:22.384 00:06:22.384 ' 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:22.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.384 --rc genhtml_branch_coverage=1 00:06:22.384 --rc genhtml_function_coverage=1 00:06:22.384 --rc genhtml_legend=1 00:06:22.384 --rc geninfo_all_blocks=1 00:06:22.384 --rc geninfo_unexecuted_blocks=1 00:06:22.384 00:06:22.384 ' 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:22.384 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:22.385 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:22.385 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:22.385 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:22.385 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:22.385 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:06:22.385 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:22.385 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:22.385 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:22.385 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:22.385 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:22.385 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:22.385 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:22.385 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:22.385 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:22.385 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:22.385 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:22.385 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:22.385 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:22.385 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:22.385 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:22.385 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:22.385 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:22.385 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:22.385 07:08:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.287 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:24.287 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:24.287 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:24.287 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:24.287 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:24.287 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:24.287 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:24.288 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:24.288 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:24.288 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:24.288 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:06:24.288 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:24.288 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:24.288 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:24.288 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:24.288 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:24.288 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:24.288 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:24.288 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:24.288 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:24.288 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:24.288 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:24.288 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:24.288 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:24.288 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:24.288 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:24.288 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:24.288 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:24.288 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:24.288 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:24.288 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:24.288 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:24.288 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:24.288 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:24.288 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:24.288 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:24.288 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:24.288 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:24.288 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:24.288 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:24.288 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:24.288 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:24.288 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:24.288 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:24.288 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:24.288 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:24.288 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:24.288 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:24.288 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:24.288 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:24.289 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:24.289 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:24.289 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:24.289 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:24.289 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:24.289 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:24.289 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:24.289 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:24.289 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:24.289 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:24.289 Found net devices under 0000:09:00.0: cvl_0_0 00:06:24.289 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:24.289 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:24.289 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:24.289 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:24.289 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:24.289 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:24.289 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:24.289 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:24.289 07:08:27 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:24.289 Found net devices under 0000:09:00.1: cvl_0_1 00:06:24.289 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:24.289 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:24.289 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:24.289 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:24.289 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:24.289 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:24.289 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:24.289 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:24.289 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:24.289 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:24.289 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:24.289 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:24.289 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:24.289 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:24.289 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:24.289 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:24.289 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:24.289 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:24.289 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:24.289 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:24.289 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:24.289 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:24.289 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:24.290 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:24.290 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:24.290 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:24.290 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:24.290 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:24.290 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:24.290 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:24.290 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:06:24.290 00:06:24.290 --- 10.0.0.2 ping statistics --- 00:06:24.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:24.290 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:06:24.290 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:24.290 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:24.290 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:06:24.290 00:06:24.290 --- 10.0.0.1 ping statistics --- 00:06:24.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:24.290 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:06:24.290 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:24.290 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:24.290 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:24.290 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:24.290 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:24.290 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:24.290 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:24.290 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:24.290 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:24.290 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:24.290 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:24.290 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:24.290 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:24.290 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:24.290 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.290 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2399925 00:06:24.290 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:24.290 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2399925 00:06:24.290 07:08:27 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 2399925 ']' 00:06:24.290 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.290 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:24.290 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.290 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:24.290 07:08:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.549 [2024-11-20 07:08:27.748005] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:06:24.549 [2024-11-20 07:08:27.748094] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:24.549 [2024-11-20 07:08:27.821071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:24.549 [2024-11-20 07:08:27.881282] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:24.550 [2024-11-20 07:08:27.881339] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:24.550 [2024-11-20 07:08:27.881379] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:24.550 [2024-11-20 07:08:27.881392] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:24.550 [2024-11-20 07:08:27.881402] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
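The startup notices above also spell out how to inspect the target after the fact: core mask 0x1E places reactors on cores 1-4, and the 0xFFFF tracepoint group mask keeps a full event trace in shared memory. If a failure needed debugging, the snapshot the log suggests could be taken like this (the command form is taken verbatim from the notice; the output paths are arbitrary):

  # Snapshot the live trace of nvmf_tgt instance 0, as the notice suggests
  ./build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.snapshot
  # Or keep the raw shm file for offline analysis, as also suggested above
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
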
00:06:24.550 [2024-11-20 07:08:27.882954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.550 [2024-11-20 07:08:27.883018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:24.550 [2024-11-20 07:08:27.883085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:24.550 [2024-11-20 07:08:27.883088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.808 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:24.808 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:06:24.808 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:24.808 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:24.808 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.808 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:24.808 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:24.808 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.808 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.808 [2024-11-20 07:08:28.042538] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:24.808 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.808 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:24.808 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:24.808 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.808 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:24.808 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:24.808 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:24.808 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.808 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.808 Malloc0 00:06:24.808 [2024-11-20 07:08:28.121679] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:24.808 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.808 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:24.808 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:24.808 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.808 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
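At this point the TCP transport exists (nvmf_create_transport -t tcp -o -u 8192) and the @22-@30 block replays an RPC batch from rpcs.txt, whose contents are not echoed in the trace. Judging from the notices that follow (Malloc0, a listener on 10.0.0.2:4420, and the cnode0/host0 NQNs used by bdevperf below), the batch is roughly equivalent to the following; this is an illustrative reconstruction with guessed sizes and serial, not the literal file:

  # Hypothetical provisioning batch consistent with the later log lines
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # size/block are guesses
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
       -t tcp -a 10.0.0.2 -s 4420
  # host0 starts on the allowed-host list; the test later removes and re-adds it
  # to exercise the host-management path
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
       nqn.2016-06.io.spdk:host0
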
target/host_management.sh@73 -- # perfpid=2400081 00:06:24.808 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2400081 /var/tmp/bdevperf.sock 00:06:24.808 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 2400081 ']' 00:06:24.808 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:24.808 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:24.808 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:24.808 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:24.808 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:24.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:24.808 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:24.808 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:24.808 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:24.808 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.808 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:24.808 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:24.808 { 00:06:24.808 "params": { 00:06:24.808 "name": "Nvme$subsystem", 00:06:24.808 "trtype": "$TEST_TRANSPORT", 00:06:24.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:24.808 "adrfam": "ipv4", 00:06:24.808 "trsvcid": "$NVMF_PORT", 00:06:24.808 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:24.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:24.808 "hdgst": ${hdgst:-false}, 00:06:24.808 "ddgst": ${ddgst:-false} 00:06:24.808 }, 00:06:24.808 "method": "bdev_nvme_attach_controller" 00:06:24.808 } 00:06:24.808 EOF 00:06:24.808 )") 00:06:24.808 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:24.808 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:24.808 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:24.808 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:24.808 "params": { 00:06:24.808 "name": "Nvme0", 00:06:24.808 "trtype": "tcp", 00:06:24.808 "traddr": "10.0.0.2", 00:06:24.808 "adrfam": "ipv4", 00:06:24.808 "trsvcid": "4420", 00:06:24.808 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:24.808 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:24.808 "hdgst": false, 00:06:24.808 "ddgst": false 00:06:24.808 }, 00:06:24.808 "method": "bdev_nvme_attach_controller" 00:06:24.808 }' 00:06:24.808 [2024-11-20 07:08:28.206097] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
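gen_nvmf_target_json renders one bdev_nvme_attach_controller entry per subsystem index and hands it to bdevperf over /dev/fd/63, so the config never touches disk. The printf output above shows the params block; saved to a real file (name arbitrary) and wrapped in the usual subsystems/bdev/config envelope, the same run could be reproduced as:

  {
    "subsystems": [ {
      "subsystem": "bdev",
      "config": [ {
        "method": "bdev_nvme_attach_controller",
        "params": {
          "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
          "adrfam": "ipv4", "trsvcid": "4420",
          "subnqn": "nqn.2016-06.io.spdk:cnode0",
          "hostnqn": "nqn.2016-06.io.spdk:host0",
          "hdgst": false, "ddgst": false
        }
      } ]
    } ]
  }

  # 64-deep, 64 KiB verify workload for 10 s; -r names the RPC socket that
  # the iostat polls below will talk to
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
       --json /tmp/bdevperf_nvme.json -q 64 -o 65536 -w verify -t 10
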
00:06:24.808 [2024-11-20 07:08:28.206172] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2400081 ] 00:06:25.066 [2024-11-20 07:08:28.275055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.066 [2024-11-20 07:08:28.335836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.324 Running I/O for 10 seconds... 00:06:25.324 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:25.324 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:06:25.324 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:25.324 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.324 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:25.324 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.324 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:25.324 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:25.324 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:25.324 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:25.324 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:25.324 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:25.324 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:25.324 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:25.324 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:25.324 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:25.324 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.324 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:25.324 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.324 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:06:25.324 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:06:25.324 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:25.585 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:25.585 
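The test refuses to continue until it has proof that I/O is actually flowing: waitforio polls bdev_get_iostat on the bdevperf socket up to ten times, 0.25 s apart, and succeeds once Nvme0n1 has served at least 100 reads (the first poll above saw only 67). A simplified sketch of that helper, with the socket and bdev name hard-coded for this run:

  waitforio() {
      # Succeed once the bdev has completed >=100 reads; give up after 10 polls.
      local i reads
      for i in {1..10}; do
          reads=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock \
                      bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
          [ "$reads" -ge 100 ] && return 0
          sleep 0.25
      done
      return 1
  }
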
07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:25.585 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:25.585 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:25.585 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.585 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:25.585 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.585 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=539 00:06:25.585 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 539 -ge 100 ']' 00:06:25.585 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:25.585 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:25.585 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:25.585 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:25.585 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.585 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:25.585 [2024-11-20 07:08:28.924299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8f10 is same with the state(6) to be set 00:06:25.585 [2024-11-20 07:08:28.924388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8f10 is same with the state(6) to be set 00:06:25.585 [2024-11-20 07:08:28.924404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8f10 is same with the state(6) to be set 00:06:25.585 [2024-11-20 07:08:28.924417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8f10 is same with the state(6) to be set 00:06:25.585 [2024-11-20 07:08:28.924429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8f10 is same with the state(6) to be set 00:06:25.585 [2024-11-20 07:08:28.924442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8f10 is same with the state(6) to be set 00:06:25.585 [2024-11-20 07:08:28.924454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8f10 is same with the state(6) to be set 00:06:25.585 [2024-11-20 07:08:28.924467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8f10 is same with the state(6) to be set 00:06:25.585 [2024-11-20 07:08:28.924479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8f10 is same with the state(6) to be set 00:06:25.585 [2024-11-20 07:08:28.924491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8f10 is same with the state(6) to be set 00:06:25.585 [2024-11-20 07:08:28.924503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8f10 is 
same with the state(6) to be set 00:06:25.585 [2024-11-20 07:08:28.924515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8f10 is same with the state(6) to be set 00:06:25.585 [2024-11-20 07:08:28.924527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8f10 is same with the state(6) to be set 00:06:25.585 [2024-11-20 07:08:28.924540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8f10 is same with the state(6) to be set 00:06:25.585 [2024-11-20 07:08:28.924551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8f10 is same with the state(6) to be set 00:06:25.585 [2024-11-20 07:08:28.924564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8f10 is same with the state(6) to be set 00:06:25.585 [2024-11-20 07:08:28.924576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8f10 is same with the state(6) to be set 00:06:25.585 [2024-11-20 07:08:28.924599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8f10 is same with the state(6) to be set 00:06:25.585 [2024-11-20 07:08:28.924611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8f10 is same with the state(6) to be set 00:06:25.585 [2024-11-20 07:08:28.924623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8f10 is same with the state(6) to be set 00:06:25.585 [2024-11-20 07:08:28.924646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8f10 is same with the state(6) to be set 00:06:25.585 [2024-11-20 07:08:28.924662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8f10 is same with the state(6) to be set 00:06:25.585 [2024-11-20 07:08:28.924674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8f10 is same with the state(6) to be set 00:06:25.585 [2024-11-20 07:08:28.924686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8f10 is same with the state(6) to be set 00:06:25.585 [2024-11-20 07:08:28.924698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8f10 is same with the state(6) to be set 00:06:25.585 [2024-11-20 07:08:28.924710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8f10 is same with the state(6) to be set 00:06:25.585 [2024-11-20 07:08:28.924724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8f10 is same with the state(6) to be set 00:06:25.585 [2024-11-20 07:08:28.924736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8f10 is same with the state(6) to be set 00:06:25.585 [2024-11-20 07:08:28.924747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8f10 is same with the state(6) to be set 00:06:25.585 [2024-11-20 07:08:28.924759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8f10 is same with the state(6) to be set 00:06:25.585 [2024-11-20 07:08:28.924771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8f10 is same with the state(6) to be set 00:06:25.585 [2024-11-20 07:08:28.924783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8f10 is same with the state(6) to be set 00:06:25.585 [2024-11-20 07:08:28.924795] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8f10 is same with the state(6) to be set 00:06:25.585 [2024-11-20 07:08:28.924815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8f10 is same with the state(6) to be set 00:06:25.586 [2024-11-20 07:08:28.924827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8f10 is same with the state(6) to be set 00:06:25.586 [2024-11-20 07:08:28.924839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8f10 is same with the state(6) to be set 00:06:25.586 [2024-11-20 07:08:28.924851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8f10 is same with the state(6) to be set 00:06:25.586 [2024-11-20 07:08:28.924863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8f10 is same with the state(6) to be set 00:06:25.586 [2024-11-20 07:08:28.924880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8f10 is same with the state(6) to be set 00:06:25.586 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.586 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:25.586 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.586 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:25.586 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.586 07:08:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:25.586 [2024-11-20 07:08:28.941772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:25.586 [2024-11-20 07:08:28.941818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.586 [2024-11-20 07:08:28.941845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:06:25.586 [2024-11-20 07:08:28.941867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.586 [2024-11-20 07:08:28.941882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:06:25.586 [2024-11-20 07:08:28.941896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.586 [2024-11-20 07:08:28.941909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:06:25.586 [2024-11-20 07:08:28.941922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.586 [2024-11-20 07:08:28.941935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10bca40 is same with the state(6) to be set 00:06:25.586 [2024-11-20 07:08:28.942030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.586 [2024-11-20 07:08:28.942051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.586 [2024-11-20 07:08:28.942075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.586 [2024-11-20 07:08:28.942091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.586 [2024-11-20 07:08:28.942107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.586 [2024-11-20 07:08:28.942121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.586 [2024-11-20 07:08:28.942137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.586 [2024-11-20 07:08:28.942151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.586 [2024-11-20 07:08:28.942167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.586 [2024-11-20 07:08:28.942181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.586 [2024-11-20 07:08:28.942196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.586 [2024-11-20 07:08:28.942210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.586 [2024-11-20 07:08:28.942225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.586 [2024-11-20 07:08:28.942239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.586 [2024-11-20 07:08:28.942254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.586 [2024-11-20 07:08:28.942268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.586 [2024-11-20 07:08:28.942284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.586 [2024-11-20 07:08:28.942298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.586 [2024-11-20 07:08:28.942324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.586 [2024-11-20 07:08:28.942344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.586 [2024-11-20 07:08:28.942364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.586 [2024-11-20 07:08:28.942378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.586 [2024-11-20 07:08:28.942394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.586 [2024-11-20 07:08:28.942408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.586 [2024-11-20 07:08:28.942423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.586 [2024-11-20 07:08:28.942437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.586 [2024-11-20 07:08:28.942453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.586 [2024-11-20 07:08:28.942467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.586 [2024-11-20 07:08:28.942482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.586 [2024-11-20 07:08:28.942496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.586 [2024-11-20 07:08:28.942511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.586 [2024-11-20 07:08:28.942525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.586 [2024-11-20 07:08:28.942540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.586 [2024-11-20 07:08:28.942555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.586 [2024-11-20 07:08:28.942570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.586 [2024-11-20 07:08:28.942584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.586 [2024-11-20 07:08:28.942603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.586 [2024-11-20 07:08:28.942617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.586 [2024-11-20 07:08:28.942632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.586 [2024-11-20 07:08:28.942646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.586 [2024-11-20 07:08:28.942660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.586 [2024-11-20 07:08:28.942674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.586 [2024-11-20 07:08:28.942689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.586 [2024-11-20 07:08:28.942704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.586 [2024-11-20 07:08:28.942722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.586 [2024-11-20 07:08:28.942737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.586 [2024-11-20 07:08:28.942752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.586 [2024-11-20 07:08:28.942766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.586 [2024-11-20 07:08:28.942782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.586 [2024-11-20 07:08:28.942795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.586 [2024-11-20 07:08:28.942810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.586 [2024-11-20 07:08:28.942824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.586 [2024-11-20 07:08:28.942839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.586 [2024-11-20 07:08:28.942852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.586 [2024-11-20 07:08:28.942868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.586 [2024-11-20 07:08:28.942881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.586 [2024-11-20 07:08:28.942896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.586 [2024-11-20 07:08:28.942910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.586 [2024-11-20 07:08:28.942925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.586 [2024-11-20 07:08:28.942940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.587 [2024-11-20 07:08:28.942955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:06:25.587 [2024-11-20 07:08:28.942971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.587 [2024-11-20 07:08:28.942986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.587 [2024-11-20 07:08:28.943000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.587 [2024-11-20 07:08:28.943015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.587 [2024-11-20 07:08:28.943029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.587 [2024-11-20 07:08:28.943044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.587 [2024-11-20 07:08:28.943058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.587 [2024-11-20 07:08:28.943074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.587 [2024-11-20 07:08:28.943092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.587 [2024-11-20 07:08:28.943108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.587 [2024-11-20 07:08:28.943122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.587 [2024-11-20 07:08:28.943137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.587 [2024-11-20 07:08:28.943151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.587 [2024-11-20 07:08:28.943167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.587 [2024-11-20 07:08:28.943181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.587 [2024-11-20 07:08:28.943199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.587 [2024-11-20 07:08:28.943213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.587 [2024-11-20 07:08:28.943228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.587 [2024-11-20 07:08:28.943242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.587 [2024-11-20 07:08:28.943264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:06:25.587 [2024-11-20 07:08:28.943278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.587 [2024-11-20 07:08:28.943294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.587 [2024-11-20 07:08:28.943317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.587 [2024-11-20 07:08:28.943333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.587 [2024-11-20 07:08:28.943348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.587 [2024-11-20 07:08:28.943367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.587 [2024-11-20 07:08:28.943382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.587 [2024-11-20 07:08:28.943397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.587 [2024-11-20 07:08:28.943411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.587 [2024-11-20 07:08:28.943426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.587 [2024-11-20 07:08:28.943440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.587 [2024-11-20 07:08:28.943456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.587 [2024-11-20 07:08:28.943471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.587 [2024-11-20 07:08:28.943491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.587 [2024-11-20 07:08:28.943505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.587 [2024-11-20 07:08:28.943521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.587 [2024-11-20 07:08:28.943535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.587 [2024-11-20 07:08:28.943550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.587 [2024-11-20 07:08:28.943564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.587 [2024-11-20 07:08:28.943580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:25.587 [2024-11-20 07:08:28.943594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.587 [2024-11-20 07:08:28.943612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.587 [2024-11-20 07:08:28.943626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.587 [2024-11-20 07:08:28.943641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.587 [2024-11-20 07:08:28.943655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.587 [2024-11-20 07:08:28.943670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.587 [2024-11-20 07:08:28.943684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.587 [2024-11-20 07:08:28.943699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.587 [2024-11-20 07:08:28.943714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.587 [2024-11-20 07:08:28.943729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.587 [2024-11-20 07:08:28.943743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.587 [2024-11-20 07:08:28.943758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.587 [2024-11-20 07:08:28.943773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.587 [2024-11-20 07:08:28.943788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.587 [2024-11-20 07:08:28.943802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.587 [2024-11-20 07:08:28.943817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.587 [2024-11-20 07:08:28.943831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.587 [2024-11-20 07:08:28.943846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.588 [2024-11-20 07:08:28.943863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.588 [2024-11-20 07:08:28.943880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.588 
[2024-11-20 07:08:28.943894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.588 [2024-11-20 07:08:28.943909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.588 [2024-11-20 07:08:28.943923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.588 [2024-11-20 07:08:28.943939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.588 [2024-11-20 07:08:28.943953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.588 [2024-11-20 07:08:28.943969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.588 [2024-11-20 07:08:28.943983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.588 [2024-11-20 07:08:28.945181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:25.588 task offset: 81664 on job bdev=Nvme0n1 fails 00:06:25.588 00:06:25.588 Latency(us) 00:06:25.588 [2024-11-20T06:08:29.021Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:25.588 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:25.588 Job: Nvme0n1 ended in about 0.42 seconds with error 00:06:25.588 Verification LBA range: start 0x0 length 0x400 00:06:25.588 Nvme0n1 : 0.42 1530.73 95.67 153.55 0.00 36944.80 2645.71 35340.89 00:06:25.588 [2024-11-20T06:08:29.021Z] =================================================================================================================== 00:06:25.588 [2024-11-20T06:08:29.021Z] Total : 1530.73 95.67 153.55 0.00 36944.80 2645.71 35340.89 00:06:25.588 [2024-11-20 07:08:28.947084] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:25.588 [2024-11-20 07:08:28.947127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bca40 (9): Bad file descriptor 00:06:25.588 [2024-11-20 07:08:28.956690] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
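The lines above capture the point of the whole test: with 64 I/Os in flight, @84 removes host0 from cnode0's allowed-host list, the target tears down that host's queue pairs (hence the wall of tqpair state changes, the ABORTED - SQ DELETION completions, and the failed 0.42 s job summary), and once @85 re-adds the host the bdev_nvme reset path reconnects cleanly ("Resetting controller successful"). The RPC sequence being exercised is simply:

  # Revoke the host's access while I/O is running...
  ./scripts/rpc.py nvmf_subsystem_remove_host \
       nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  # ...then restore it and give the initiator a second to reconnect
  ./scripts/rpc.py nvmf_subsystem_add_host \
       nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  sleep 1
  # Optionally confirm the controller came back from the initiator's point of view
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n Nvme0
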
00:06:26.521 07:08:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2400081 00:06:26.521 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2400081) - No such process 00:06:26.521 07:08:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:26.521 07:08:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:26.521 07:08:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:26.521 07:08:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:26.521 07:08:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:26.521 07:08:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:26.521 07:08:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:26.521 07:08:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:26.521 { 00:06:26.521 "params": { 00:06:26.521 "name": "Nvme$subsystem", 00:06:26.521 "trtype": "$TEST_TRANSPORT", 00:06:26.521 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:26.521 "adrfam": "ipv4", 00:06:26.521 "trsvcid": "$NVMF_PORT", 00:06:26.521 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:26.521 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:26.521 "hdgst": ${hdgst:-false}, 00:06:26.521 "ddgst": ${ddgst:-false} 00:06:26.521 }, 00:06:26.521 "method": "bdev_nvme_attach_controller" 00:06:26.521 } 00:06:26.521 EOF 00:06:26.521 )") 00:06:26.521 07:08:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:26.521 07:08:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:26.521 07:08:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:26.521 07:08:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:26.521 "params": { 00:06:26.521 "name": "Nvme0", 00:06:26.521 "trtype": "tcp", 00:06:26.521 "traddr": "10.0.0.2", 00:06:26.521 "adrfam": "ipv4", 00:06:26.521 "trsvcid": "4420", 00:06:26.521 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:26.521 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:26.521 "hdgst": false, 00:06:26.521 "ddgst": false 00:06:26.521 }, 00:06:26.521 "method": "bdev_nvme_attach_controller" 00:06:26.521 }' 00:06:26.779 [2024-11-20 07:08:29.990904] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:06:26.779 [2024-11-20 07:08:29.990983] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2400247 ] 00:06:26.779 [2024-11-20 07:08:30.065228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.779 [2024-11-20 07:08:30.127909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.038 Running I/O for 1 seconds... 
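The -t 1 bdevperf invocation above is the recovery check: same JSON wiring, but a short one-second verify pass to show the path is healthy now that host0 is allowed again (the kill -9 on the first perf pid reporting "No such process" is expected, since that job already exited on its own). With the hypothetical config file from the earlier sketch it would be:

  # Short re-verification after the ACL round-trip; expect zero Fail/s this time
  ./build/examples/bdevperf --json /tmp/bdevperf_nvme.json \
       -q 64 -o 65536 -w verify -t 1
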
00:06:27.986 1633.00 IOPS, 102.06 MiB/s 00:06:27.986 Latency(us) 00:06:27.986 [2024-11-20T06:08:31.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:27.986 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:27.986 Verification LBA range: start 0x0 length 0x400 00:06:27.986 Nvme0n1 : 1.04 1666.90 104.18 0.00 0.00 37774.33 7233.23 33593.27 00:06:27.986 [2024-11-20T06:08:31.419Z] =================================================================================================================== 00:06:27.986 [2024-11-20T06:08:31.419Z] Total : 1666.90 104.18 0.00 0.00 37774.33 7233.23 33593.27 00:06:28.244 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:28.244 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:28.244 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:28.244 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:28.244 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:28.244 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:28.244 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:28.244 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:28.244 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:28.244 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:28.244 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:28.244 rmmod nvme_tcp 00:06:28.244 rmmod nvme_fabrics 00:06:28.244 rmmod nvme_keyring 00:06:28.244 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:28.244 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:28.244 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:28.244 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2399925 ']' 00:06:28.244 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2399925 00:06:28.244 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 2399925 ']' 00:06:28.244 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 2399925 00:06:28.244 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:06:28.244 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:28.244 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2399925 00:06:28.502 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:06:28.502 07:08:31 
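With both passes done, stoptarget and nvmftestfini unwind what the test set up: the bdevperf state file and generated configs are removed, nvme-tcp is unloaded (modprobe -r also drags out nvme-fabrics and nvme-keyring, as the rmmod lines show), and the target pid is killed once killprocess has satisfied itself it is not about to kill sudo. A condensed hand-run equivalent, assuming nvmfpid still holds the target's pid:

  sync
  sudo modprobe -v -r nvme-tcp          # unloads nvme_tcp plus its now-unused deps
  sudo kill "$nvmfpid"
  wait "$nvmfpid" 2>/dev/null || true   # reap it; ignore "not a child" if detached
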
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:06:28.502 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2399925' 00:06:28.502 killing process with pid 2399925 00:06:28.502 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 2399925 00:06:28.502 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 2399925 00:06:28.502 [2024-11-20 07:08:31.912772] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:28.760 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:28.760 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:28.760 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:28.760 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:28.760 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:28.760 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:28.760 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:28.760 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:28.760 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:28.760 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:28.760 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:28.760 07:08:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:30.668 07:08:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:30.668 07:08:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:30.668 00:06:30.668 real 0m8.755s 00:06:30.668 user 0m19.230s 00:06:30.668 sys 0m2.807s 00:06:30.668 07:08:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:30.668 07:08:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:30.668 ************************************ 00:06:30.668 END TEST nvmf_host_management 00:06:30.668 ************************************ 00:06:30.668 07:08:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:30.668 07:08:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:30.668 07:08:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:30.668 07:08:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:30.668 ************************************ 00:06:30.668 START TEST nvmf_lvol 00:06:30.668 ************************************ 00:06:30.668 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:30.668 * Looking for test storage... 00:06:30.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:30.668 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:30.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.929 --rc genhtml_branch_coverage=1 00:06:30.929 --rc genhtml_function_coverage=1 00:06:30.929 --rc genhtml_legend=1 00:06:30.929 --rc geninfo_all_blocks=1 00:06:30.929 --rc geninfo_unexecuted_blocks=1 00:06:30.929 00:06:30.929 ' 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:30.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.929 --rc genhtml_branch_coverage=1 00:06:30.929 --rc genhtml_function_coverage=1 00:06:30.929 --rc genhtml_legend=1 00:06:30.929 --rc geninfo_all_blocks=1 00:06:30.929 --rc geninfo_unexecuted_blocks=1 00:06:30.929 00:06:30.929 ' 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:30.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.929 --rc genhtml_branch_coverage=1 00:06:30.929 --rc genhtml_function_coverage=1 00:06:30.929 --rc genhtml_legend=1 00:06:30.929 --rc geninfo_all_blocks=1 00:06:30.929 --rc geninfo_unexecuted_blocks=1 00:06:30.929 00:06:30.929 ' 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:30.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.929 --rc genhtml_branch_coverage=1 00:06:30.929 --rc genhtml_function_coverage=1 00:06:30.929 --rc genhtml_legend=1 00:06:30.929 --rc geninfo_all_blocks=1 00:06:30.929 --rc geninfo_unexecuted_blocks=1 00:06:30.929 00:06:30.929 ' 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
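The trace above is scripts/common.sh deciding whether the installed lcov (1.15) is older than 2 by splitting each version string on '.', '-' and ':' and comparing the fields numerically, left to right. A minimal stand-alone sketch of that check in plain bash; this is a simplified illustration (numeric fields assumed), not the verbatim SPDK helper:

    lt() {                                  # succeed (return 0) when version $1 < version $2
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i a b n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            a=${v1[i]:-0}; b=${v2[i]:-0}    # missing fields count as 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1                            # equal versions are not "less than"
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2 lcov detected"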
00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:30.929 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.930 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.930 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.930 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:30.930 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.930 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:30.930 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:30.930 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:30.930 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:30.930 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:30.930 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:30.930 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:30.930 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:30.930 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:30.930 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:30.930 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:30.930 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:30.930 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:30.930 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:06:30.930 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:30.930 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:30.930 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:30.930 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:30.930 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:30.930 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:30.930 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:30.930 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:30.930 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:30.930 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:30.930 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:30.930 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:30.930 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:30.930 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:30.930 07:08:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:33.462 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:33.462 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:33.462 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:33.462 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:33.462 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:33.463 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:33.463 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:33.463 07:08:36 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:33.463 Found net devices under 0000:09:00.0: cvl_0_0 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:33.463 Found net devices under 0000:09:00.1: cvl_0_1 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:33.463 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:33.463 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.329 ms 00:06:33.463 00:06:33.463 --- 10.0.0.2 ping statistics --- 00:06:33.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:33.463 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:33.463 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:33.463 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:06:33.463 00:06:33.463 --- 10.0.0.1 ping statistics --- 00:06:33.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:33.463 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:33.463 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:33.464 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:33.464 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:33.464 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:33.464 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:33.464 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:33.464 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:33.464 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:33.464 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:33.464 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:33.464 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2402458 00:06:33.464 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:33.464 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2402458 00:06:33.464 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 2402458 ']' 00:06:33.464 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.464 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:33.464 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.464 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:33.464 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:33.464 [2024-11-20 07:08:36.606500] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
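Condensing the nvmf_tcp_init steps traced above: the harness moves one port of the e810 pair (cvl_0_0) into a network namespace to act as the target, keeps the other port (cvl_0_1) in the root namespace as the initiator, opens TCP/4420 in iptables, verifies reachability in both directions, and then launches nvmf_tgt inside the namespace. A condensed sketch, with the interface names and addresses taken from this run and the SPDK build path abbreviated:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side NIC lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &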
00:06:33.464 [2024-11-20 07:08:36.606595] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:33.464 [2024-11-20 07:08:36.676982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:33.464 [2024-11-20 07:08:36.732353] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:33.464 [2024-11-20 07:08:36.732406] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:33.464 [2024-11-20 07:08:36.732435] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:33.464 [2024-11-20 07:08:36.732447] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:33.464 [2024-11-20 07:08:36.732457] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:33.464 [2024-11-20 07:08:36.733906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.464 [2024-11-20 07:08:36.733965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.464 [2024-11-20 07:08:36.733968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.464 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:33.464 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:06:33.464 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:33.464 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:33.464 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:33.464 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:33.464 07:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:33.722 [2024-11-20 07:08:37.130733] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:33.979 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:34.237 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:34.237 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:34.495 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:34.495 07:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:34.753 07:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:35.011 07:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=13da9156-bcf9-425f-b94e-1a980a8fffac 00:06:35.011 07:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 13da9156-bcf9-425f-b94e-1a980a8fffac lvol 20 00:06:35.268 07:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=d5a21f40-ae2a-4348-981b-1d35248a1d4e 00:06:35.268 07:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:35.526 07:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d5a21f40-ae2a-4348-981b-1d35248a1d4e 00:06:35.783 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:36.040 [2024-11-20 07:08:39.386883] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:36.040 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:36.297 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2402884 00:06:36.297 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:36.297 07:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:37.670 07:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot d5a21f40-ae2a-4348-981b-1d35248a1d4e MY_SNAPSHOT 00:06:37.670 07:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=a19c2080-79f7-4906-8340-a88e685b813c 00:06:37.670 07:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize d5a21f40-ae2a-4348-981b-1d35248a1d4e 30 00:06:37.928 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone a19c2080-79f7-4906-8340-a88e685b813c MY_CLONE 00:06:38.185 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=393f7976-e429-41b9-90ec-94a67cc49f86 00:06:38.185 07:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 393f7976-e429-41b9-90ec-94a67cc49f86 00:06:39.118 07:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2402884 00:06:47.293 Initializing NVMe Controllers 00:06:47.293 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:47.293 Controller IO queue size 128, less than required. 00:06:47.293 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:47.293 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:47.293 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:47.293 Initialization complete. Launching workers. 00:06:47.293 ======================================================== 00:06:47.293 Latency(us) 00:06:47.293 Device Information : IOPS MiB/s Average min max 00:06:47.293 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10537.60 41.16 12150.57 1121.78 81454.85 00:06:47.293 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10410.70 40.67 12298.11 2090.50 52760.50 00:06:47.293 ======================================================== 00:06:47.293 Total : 20948.30 81.83 12223.90 1121.78 81454.85 00:06:47.293 00:06:47.293 07:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:47.293 07:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d5a21f40-ae2a-4348-981b-1d35248a1d4e 00:06:47.293 07:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 13da9156-bcf9-425f-b94e-1a980a8fffac 00:06:47.551 07:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:47.551 07:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:47.551 07:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:47.551 07:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:47.551 07:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:47.551 07:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:47.551 07:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:47.551 07:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:47.551 07:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:47.551 rmmod nvme_tcp 00:06:47.551 rmmod nvme_fabrics 00:06:47.809 rmmod nvme_keyring 00:06:47.809 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:47.809 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:47.809 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:47.809 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2402458 ']' 00:06:47.809 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2402458 00:06:47.809 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 2402458 ']' 00:06:47.809 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 2402458 00:06:47.809 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:06:47.809 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:47.809 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2402458 00:06:47.809 07:08:51 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:47.809 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:47.809 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2402458' 00:06:47.809 killing process with pid 2402458 00:06:47.809 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 2402458 00:06:47.809 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 2402458 00:06:48.067 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:48.067 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:48.067 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:48.067 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:48.067 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:48.067 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:48.067 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:48.067 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:48.067 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:48.068 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:48.068 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:48.068 07:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:49.974 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:49.974 00:06:49.974 real 0m19.331s 00:06:49.974 user 1m5.923s 00:06:49.974 sys 0m5.449s 00:06:49.974 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:49.974 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:49.974 ************************************ 00:06:49.974 END TEST nvmf_lvol 00:06:49.974 ************************************ 00:06:49.974 07:08:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:49.974 07:08:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:49.974 07:08:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:49.974 07:08:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:50.233 ************************************ 00:06:50.233 START TEST nvmf_lvs_grow 00:06:50.233 ************************************ 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:50.233 * Looking for test storage... 
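The nvmf_lvol run that finished above exercised the full logical-volume lifecycle over NVMe/TCP: build a raid0 from two malloc bdevs, carve an lvstore and a 20 MiB lvol out of it, export it through subsystem cnode0, and snapshot/resize/clone/inflate it while spdk_nvme_perf drives random writes. A condensed recap of the RPC sequence from that trace, with rpc.py pointing at the SPDK scripts directory and the names returned by each call captured into shell variables (the variable names are illustrative, not from the harness):

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                               # Malloc0
    $rpc bdev_malloc_create 64 512                               # Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)              # LVOL_BDEV_INIT_SIZE=20
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # drive I/O against the exported volume, then snapshot/resize/clone/inflate it live
    ./build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30                             # LVOL_BDEV_FINAL_SIZE=30
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"
    wait                                                         # let the perf job finish
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0        # teardown
    $rpc bdev_lvol_delete "$lvol"
    $rpc bdev_lvol_delete_lvstore -u "$lvs"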
00:06:50.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:50.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.233 --rc genhtml_branch_coverage=1 00:06:50.233 --rc genhtml_function_coverage=1 00:06:50.233 --rc genhtml_legend=1 00:06:50.233 --rc geninfo_all_blocks=1 00:06:50.233 --rc geninfo_unexecuted_blocks=1 00:06:50.233 00:06:50.233 ' 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:50.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.233 --rc genhtml_branch_coverage=1 00:06:50.233 --rc genhtml_function_coverage=1 00:06:50.233 --rc genhtml_legend=1 00:06:50.233 --rc geninfo_all_blocks=1 00:06:50.233 --rc geninfo_unexecuted_blocks=1 00:06:50.233 00:06:50.233 ' 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:50.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.233 --rc genhtml_branch_coverage=1 00:06:50.233 --rc genhtml_function_coverage=1 00:06:50.233 --rc genhtml_legend=1 00:06:50.233 --rc geninfo_all_blocks=1 00:06:50.233 --rc geninfo_unexecuted_blocks=1 00:06:50.233 00:06:50.233 ' 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:50.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.233 --rc genhtml_branch_coverage=1 00:06:50.233 --rc genhtml_function_coverage=1 00:06:50.233 --rc genhtml_legend=1 00:06:50.233 --rc geninfo_all_blocks=1 00:06:50.233 --rc geninfo_unexecuted_blocks=1 00:06:50.233 00:06:50.233 ' 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:50.233 07:08:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:50.233 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.234 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.234 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.234 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:50.234 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.234 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:50.234 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:50.234 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:50.234 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:50.234 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:50.234 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:50.234 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:50.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:50.234 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:50.234 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:50.234 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:50.234 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:50.234 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:50.234 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:50.234 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:50.234 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:50.234 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:50.234 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:50.234 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:50.234 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:50.234 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:50.234 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:50.234 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:50.234 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:50.234 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:06:50.234 07:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:52.764 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:52.764 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:06:52.764 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:52.764 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:52.764 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:52.764 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:52.764 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:52.764 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:06:52.764 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:52.764 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:06:52.764 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:06:52.764 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:06:52.764 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:06:52.764 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:06:52.764 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:06:52.764 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:52.764 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:52.764 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:52.764 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:52.764 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:52.764 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:52.765 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:52.765 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:52.765 07:08:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:52.765 Found net devices under 0000:09:00.0: cvl_0_0 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:52.765 Found net devices under 0000:09:00.1: cvl_0_1 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:52.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:52.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:06:52.765 00:06:52.765 --- 10.0.0.2 ping statistics --- 00:06:52.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:52.765 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:52.765 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:52.765 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:06:52.765 00:06:52.765 --- 10.0.0.1 ping statistics --- 00:06:52.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:52.765 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2406178 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2406178 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 2406178 ']' 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:52.765 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.766 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:52.766 07:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:52.766 [2024-11-20 07:08:55.961952] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:06:52.766 [2024-11-20 07:08:55.962045] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:52.766 [2024-11-20 07:08:56.036406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.766 [2024-11-20 07:08:56.094714] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:52.766 [2024-11-20 07:08:56.094759] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:52.766 [2024-11-20 07:08:56.094794] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:52.766 [2024-11-20 07:08:56.094806] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:52.766 [2024-11-20 07:08:56.094816] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:52.766 [2024-11-20 07:08:56.095454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.023 07:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:53.023 07:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:06:53.023 07:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:53.023 07:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:53.023 07:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:53.023 07:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:53.023 07:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:53.281 [2024-11-20 07:08:56.487770] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:53.281 07:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:53.281 07:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:53.281 07:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:53.281 07:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:53.281 ************************************ 00:06:53.281 START TEST lvs_grow_clean 00:06:53.281 ************************************ 00:06:53.281 07:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:06:53.281 07:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:06:53.281 07:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:53.281 07:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:53.282 07:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:53.282 07:08:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:53.282 07:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:53.282 07:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:53.282 07:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:53.282 07:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:53.540 07:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:53.540 07:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:53.797 07:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=0be60448-6f39-40a0-9ecc-a3adf9d13c4f 00:06:53.797 07:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0be60448-6f39-40a0-9ecc-a3adf9d13c4f 00:06:53.797 07:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:54.055 07:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:54.055 07:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:54.055 07:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0be60448-6f39-40a0-9ecc-a3adf9d13c4f lvol 150 00:06:54.313 07:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c15648bd-1586-4507-b2a8-a4c8f07a81cb 00:06:54.313 07:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:54.313 07:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:54.571 [2024-11-20 07:08:57.905709] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:54.571 [2024-11-20 07:08:57.905808] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:54.571 true 00:06:54.571 07:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
0be60448-6f39-40a0-9ecc-a3adf9d13c4f 00:06:54.571 07:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:54.829 07:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:54.829 07:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:55.086 07:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c15648bd-1586-4507-b2a8-a4c8f07a81cb 00:06:55.345 07:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:55.603 [2024-11-20 07:08:58.989014] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:55.603 07:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:55.861 07:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2406618 00:06:55.861 07:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:55.861 07:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:55.861 07:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2406618 /var/tmp/bdevperf.sock 00:06:55.861 07:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 2406618 ']' 00:06:55.861 07:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:55.861 07:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:55.861 07:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:55.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:55.861 07:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:55.861 07:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:56.119 [2024-11-20 07:08:59.316063] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:06:56.119 [2024-11-20 07:08:59.316143] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2406618 ] 00:06:56.119 [2024-11-20 07:08:59.380755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.119 [2024-11-20 07:08:59.438569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.377 07:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:56.377 07:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:06:56.377 07:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:56.635 Nvme0n1 00:06:56.635 07:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:56.893 [ 00:06:56.893 { 00:06:56.893 "name": "Nvme0n1", 00:06:56.893 "aliases": [ 00:06:56.893 "c15648bd-1586-4507-b2a8-a4c8f07a81cb" 00:06:56.893 ], 00:06:56.893 "product_name": "NVMe disk", 00:06:56.893 "block_size": 4096, 00:06:56.893 "num_blocks": 38912, 00:06:56.893 "uuid": "c15648bd-1586-4507-b2a8-a4c8f07a81cb", 00:06:56.893 "numa_id": 0, 00:06:56.893 "assigned_rate_limits": { 00:06:56.893 "rw_ios_per_sec": 0, 00:06:56.893 "rw_mbytes_per_sec": 0, 00:06:56.893 "r_mbytes_per_sec": 0, 00:06:56.893 "w_mbytes_per_sec": 0 00:06:56.893 }, 00:06:56.893 "claimed": false, 00:06:56.893 "zoned": false, 00:06:56.893 "supported_io_types": { 00:06:56.893 "read": true, 00:06:56.893 "write": true, 00:06:56.893 "unmap": true, 00:06:56.893 "flush": true, 00:06:56.893 "reset": true, 00:06:56.893 "nvme_admin": true, 00:06:56.893 "nvme_io": true, 00:06:56.893 "nvme_io_md": false, 00:06:56.893 "write_zeroes": true, 00:06:56.893 "zcopy": false, 00:06:56.893 "get_zone_info": false, 00:06:56.893 "zone_management": false, 00:06:56.893 "zone_append": false, 00:06:56.893 "compare": true, 00:06:56.893 "compare_and_write": true, 00:06:56.893 "abort": true, 00:06:56.893 "seek_hole": false, 00:06:56.893 "seek_data": false, 00:06:56.893 "copy": true, 00:06:56.893 "nvme_iov_md": false 00:06:56.893 }, 00:06:56.893 "memory_domains": [ 00:06:56.893 { 00:06:56.893 "dma_device_id": "system", 00:06:56.893 "dma_device_type": 1 00:06:56.893 } 00:06:56.893 ], 00:06:56.893 "driver_specific": { 00:06:56.893 "nvme": [ 00:06:56.893 { 00:06:56.893 "trid": { 00:06:56.893 "trtype": "TCP", 00:06:56.893 "adrfam": "IPv4", 00:06:56.893 "traddr": "10.0.0.2", 00:06:56.893 "trsvcid": "4420", 00:06:56.893 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:56.893 }, 00:06:56.893 "ctrlr_data": { 00:06:56.893 "cntlid": 1, 00:06:56.893 "vendor_id": "0x8086", 00:06:56.893 "model_number": "SPDK bdev Controller", 00:06:56.893 "serial_number": "SPDK0", 00:06:56.893 "firmware_revision": "25.01", 00:06:56.893 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:56.893 "oacs": { 00:06:56.893 "security": 0, 00:06:56.893 "format": 0, 00:06:56.893 "firmware": 0, 00:06:56.893 "ns_manage": 0 00:06:56.893 }, 00:06:56.893 "multi_ctrlr": true, 00:06:56.893 
"ana_reporting": false 00:06:56.893 }, 00:06:56.893 "vs": { 00:06:56.893 "nvme_version": "1.3" 00:06:56.893 }, 00:06:56.893 "ns_data": { 00:06:56.893 "id": 1, 00:06:56.893 "can_share": true 00:06:56.893 } 00:06:56.893 } 00:06:56.893 ], 00:06:56.893 "mp_policy": "active_passive" 00:06:56.893 } 00:06:56.893 } 00:06:56.893 ] 00:06:56.894 07:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2406783 00:06:56.894 07:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:56.894 07:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:57.152 Running I/O for 10 seconds... 00:06:58.086 Latency(us) 00:06:58.086 [2024-11-20T06:09:01.519Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:58.086 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:58.086 Nvme0n1 : 1.00 14860.00 58.05 0.00 0.00 0.00 0.00 0.00 00:06:58.086 [2024-11-20T06:09:01.519Z] =================================================================================================================== 00:06:58.086 [2024-11-20T06:09:01.519Z] Total : 14860.00 58.05 0.00 0.00 0.00 0.00 0.00 00:06:58.086 00:06:59.019 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0be60448-6f39-40a0-9ecc-a3adf9d13c4f 00:06:59.019 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:59.019 Nvme0n1 : 2.00 15113.50 59.04 0.00 0.00 0.00 0.00 0.00 00:06:59.019 [2024-11-20T06:09:02.452Z] =================================================================================================================== 00:06:59.019 [2024-11-20T06:09:02.452Z] Total : 15113.50 59.04 0.00 0.00 0.00 0.00 0.00 00:06:59.019 00:06:59.277 true 00:06:59.277 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0be60448-6f39-40a0-9ecc-a3adf9d13c4f 00:06:59.277 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:06:59.535 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:59.535 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:59.535 07:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2406783 00:07:00.101 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:00.101 Nvme0n1 : 3.00 15240.33 59.53 0.00 0.00 0.00 0.00 0.00 00:07:00.101 [2024-11-20T06:09:03.534Z] =================================================================================================================== 00:07:00.101 [2024-11-20T06:09:03.534Z] Total : 15240.33 59.53 0.00 0.00 0.00 0.00 0.00 00:07:00.101 00:07:01.034 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:01.034 Nvme0n1 : 4.00 15351.75 59.97 0.00 0.00 0.00 0.00 0.00 00:07:01.034 [2024-11-20T06:09:04.467Z] 
=================================================================================================================== 00:07:01.034 [2024-11-20T06:09:04.467Z] Total : 15351.75 59.97 0.00 0.00 0.00 0.00 0.00 00:07:01.034 00:07:02.407 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:02.407 Nvme0n1 : 5.00 15431.00 60.28 0.00 0.00 0.00 0.00 0.00 00:07:02.407 [2024-11-20T06:09:05.840Z] =================================================================================================================== 00:07:02.407 [2024-11-20T06:09:05.840Z] Total : 15431.00 60.28 0.00 0.00 0.00 0.00 0.00 00:07:02.407 00:07:03.341 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:03.341 Nvme0n1 : 6.00 15505.00 60.57 0.00 0.00 0.00 0.00 0.00 00:07:03.341 [2024-11-20T06:09:06.774Z] =================================================================================================================== 00:07:03.341 [2024-11-20T06:09:06.774Z] Total : 15505.00 60.57 0.00 0.00 0.00 0.00 0.00 00:07:03.341 00:07:04.274 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:04.274 Nvme0n1 : 7.00 15539.71 60.70 0.00 0.00 0.00 0.00 0.00 00:07:04.274 [2024-11-20T06:09:07.707Z] =================================================================================================================== 00:07:04.274 [2024-11-20T06:09:07.707Z] Total : 15539.71 60.70 0.00 0.00 0.00 0.00 0.00 00:07:04.274 00:07:05.207 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:05.207 Nvme0n1 : 8.00 15581.62 60.87 0.00 0.00 0.00 0.00 0.00 00:07:05.207 [2024-11-20T06:09:08.640Z] =================================================================================================================== 00:07:05.207 [2024-11-20T06:09:08.640Z] Total : 15581.62 60.87 0.00 0.00 0.00 0.00 0.00 00:07:05.207 00:07:06.141 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:06.141 Nvme0n1 : 9.00 15621.44 61.02 0.00 0.00 0.00 0.00 0.00 00:07:06.141 [2024-11-20T06:09:09.574Z] =================================================================================================================== 00:07:06.141 [2024-11-20T06:09:09.574Z] Total : 15621.44 61.02 0.00 0.00 0.00 0.00 0.00 00:07:06.141 00:07:07.076 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:07.076 Nvme0n1 : 10.00 15621.40 61.02 0.00 0.00 0.00 0.00 0.00 00:07:07.076 [2024-11-20T06:09:10.509Z] =================================================================================================================== 00:07:07.076 [2024-11-20T06:09:10.509Z] Total : 15621.40 61.02 0.00 0.00 0.00 0.00 0.00 00:07:07.076 00:07:07.076 00:07:07.076 Latency(us) 00:07:07.076 [2024-11-20T06:09:10.509Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:07.076 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:07.076 Nvme0n1 : 10.00 15627.86 61.05 0.00 0.00 8185.83 2657.85 16019.91 00:07:07.076 [2024-11-20T06:09:10.509Z] =================================================================================================================== 00:07:07.076 [2024-11-20T06:09:10.509Z] Total : 15627.86 61.05 0.00 0.00 8185.83 2657.85 16019.91 00:07:07.076 { 00:07:07.076 "results": [ 00:07:07.076 { 00:07:07.076 "job": "Nvme0n1", 00:07:07.076 "core_mask": "0x2", 00:07:07.076 "workload": "randwrite", 00:07:07.076 "status": "finished", 00:07:07.076 "queue_depth": 128, 00:07:07.076 "io_size": 4096, 00:07:07.076 
"runtime": 10.004055, 00:07:07.076 "iops": 15627.862901593404, 00:07:07.076 "mibps": 61.046339459349234, 00:07:07.076 "io_failed": 0, 00:07:07.076 "io_timeout": 0, 00:07:07.076 "avg_latency_us": 8185.826745449317, 00:07:07.076 "min_latency_us": 2657.8488888888887, 00:07:07.076 "max_latency_us": 16019.91111111111 00:07:07.076 } 00:07:07.076 ], 00:07:07.076 "core_count": 1 00:07:07.076 } 00:07:07.076 07:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2406618 00:07:07.076 07:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 2406618 ']' 00:07:07.076 07:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 2406618 00:07:07.076 07:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:07:07.076 07:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:07.076 07:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2406618 00:07:07.076 07:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:07.076 07:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:07.076 07:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2406618' 00:07:07.076 killing process with pid 2406618 00:07:07.076 07:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 2406618 00:07:07.076 Received shutdown signal, test time was about 10.000000 seconds 00:07:07.076 00:07:07.076 Latency(us) 00:07:07.076 [2024-11-20T06:09:10.509Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:07.076 [2024-11-20T06:09:10.509Z] =================================================================================================================== 00:07:07.076 [2024-11-20T06:09:10.509Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:07.076 07:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 2406618 00:07:07.334 07:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:07.591 07:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:08.156 07:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0be60448-6f39-40a0-9ecc-a3adf9d13c4f 00:07:08.156 07:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:08.156 07:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:08.156 07:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:08.157 07:09:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:08.414 [2024-11-20 07:09:11.785063] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:08.414 07:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0be60448-6f39-40a0-9ecc-a3adf9d13c4f 00:07:08.414 07:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:08.414 07:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0be60448-6f39-40a0-9ecc-a3adf9d13c4f 00:07:08.414 07:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:08.414 07:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.414 07:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:08.414 07:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.414 07:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:08.414 07:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.414 07:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:08.414 07:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:08.414 07:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0be60448-6f39-40a0-9ecc-a3adf9d13c4f 00:07:08.672 request: 00:07:08.672 { 00:07:08.672 "uuid": "0be60448-6f39-40a0-9ecc-a3adf9d13c4f", 00:07:08.672 "method": "bdev_lvol_get_lvstores", 00:07:08.672 "req_id": 1 00:07:08.672 } 00:07:08.672 Got JSON-RPC error response 00:07:08.672 response: 00:07:08.672 { 00:07:08.672 "code": -19, 00:07:08.672 "message": "No such device" 00:07:08.672 } 00:07:08.672 07:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:08.672 07:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:08.672 07:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:08.672 07:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:08.672 07:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:08.931 aio_bdev 00:07:08.931 07:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c15648bd-1586-4507-b2a8-a4c8f07a81cb 00:07:08.931 07:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=c15648bd-1586-4507-b2a8-a4c8f07a81cb 00:07:08.931 07:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:08.931 07:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:07:08.931 07:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:08.931 07:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:08.931 07:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:09.497 07:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c15648bd-1586-4507-b2a8-a4c8f07a81cb -t 2000 00:07:09.497 [ 00:07:09.497 { 00:07:09.497 "name": "c15648bd-1586-4507-b2a8-a4c8f07a81cb", 00:07:09.497 "aliases": [ 00:07:09.497 "lvs/lvol" 00:07:09.497 ], 00:07:09.497 "product_name": "Logical Volume", 00:07:09.497 "block_size": 4096, 00:07:09.497 "num_blocks": 38912, 00:07:09.497 "uuid": "c15648bd-1586-4507-b2a8-a4c8f07a81cb", 00:07:09.497 "assigned_rate_limits": { 00:07:09.497 "rw_ios_per_sec": 0, 00:07:09.497 "rw_mbytes_per_sec": 0, 00:07:09.497 "r_mbytes_per_sec": 0, 00:07:09.497 "w_mbytes_per_sec": 0 00:07:09.497 }, 00:07:09.497 "claimed": false, 00:07:09.497 "zoned": false, 00:07:09.497 "supported_io_types": { 00:07:09.497 "read": true, 00:07:09.497 "write": true, 00:07:09.497 "unmap": true, 00:07:09.497 "flush": false, 00:07:09.497 "reset": true, 00:07:09.497 "nvme_admin": false, 00:07:09.497 "nvme_io": false, 00:07:09.497 "nvme_io_md": false, 00:07:09.497 "write_zeroes": true, 00:07:09.497 "zcopy": false, 00:07:09.497 "get_zone_info": false, 00:07:09.497 "zone_management": false, 00:07:09.497 "zone_append": false, 00:07:09.497 "compare": false, 00:07:09.497 "compare_and_write": false, 00:07:09.497 "abort": false, 00:07:09.497 "seek_hole": true, 00:07:09.497 "seek_data": true, 00:07:09.497 "copy": false, 00:07:09.497 "nvme_iov_md": false 00:07:09.497 }, 00:07:09.497 "driver_specific": { 00:07:09.497 "lvol": { 00:07:09.497 "lvol_store_uuid": "0be60448-6f39-40a0-9ecc-a3adf9d13c4f", 00:07:09.497 "base_bdev": "aio_bdev", 00:07:09.497 "thin_provision": false, 00:07:09.497 "num_allocated_clusters": 38, 00:07:09.497 "snapshot": false, 00:07:09.497 "clone": false, 00:07:09.497 "esnap_clone": false 00:07:09.497 } 00:07:09.497 } 00:07:09.497 } 00:07:09.497 ] 00:07:09.497 07:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:07:09.497 07:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0be60448-6f39-40a0-9ecc-a3adf9d13c4f 00:07:09.497 
07:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:09.755 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:09.755 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0be60448-6f39-40a0-9ecc-a3adf9d13c4f 00:07:09.755 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:10.013 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:10.013 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c15648bd-1586-4507-b2a8-a4c8f07a81cb 00:07:10.579 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0be60448-6f39-40a0-9ecc-a3adf9d13c4f 00:07:10.579 07:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:10.836 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:11.095 00:07:11.095 real 0m17.757s 00:07:11.095 user 0m17.309s 00:07:11.095 sys 0m1.876s 00:07:11.095 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:11.095 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:11.095 ************************************ 00:07:11.095 END TEST lvs_grow_clean 00:07:11.095 ************************************ 00:07:11.095 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:11.095 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:11.095 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:11.095 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:11.095 ************************************ 00:07:11.095 START TEST lvs_grow_dirty 00:07:11.095 ************************************ 00:07:11.095 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:07:11.095 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:11.095 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:11.095 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:11.095 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:11.095 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:11.095 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:11.095 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:11.095 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:11.095 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:11.353 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:11.353 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:11.611 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=b266393d-3cf3-4cb8-81b2-e9442168f81b 00:07:11.611 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b266393d-3cf3-4cb8-81b2-e9442168f81b 00:07:11.611 07:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:11.871 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:11.871 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:11.871 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b266393d-3cf3-4cb8-81b2-e9442168f81b lvol 150 00:07:12.186 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e2c93aa6-a9c3-4bf2-b39f-274f0a22e36a 00:07:12.186 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:12.186 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:12.447 [2024-11-20 07:09:15.705729] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:12.447 [2024-11-20 07:09:15.705829] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:12.447 true 00:07:12.447 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b266393d-3cf3-4cb8-81b2-e9442168f81b 00:07:12.447 07:09:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:12.706 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:12.706 07:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:12.965 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e2c93aa6-a9c3-4bf2-b39f-274f0a22e36a 00:07:13.225 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:13.485 [2024-11-20 07:09:16.805072] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:13.485 07:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:13.744 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2409423 00:07:13.744 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:13.744 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:13.744 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2409423 /var/tmp/bdevperf.sock 00:07:13.744 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 2409423 ']' 00:07:13.744 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:13.744 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:13.744 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:13.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:13.744 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:13.744 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:13.744 [2024-11-20 07:09:17.133411] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:07:13.744 [2024-11-20 07:09:17.133497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2409423 ] 00:07:14.002 [2024-11-20 07:09:17.199522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.002 [2024-11-20 07:09:17.256347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.002 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:14.002 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:07:14.002 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:14.570 Nvme0n1 00:07:14.570 07:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:14.827 [ 00:07:14.827 { 00:07:14.827 "name": "Nvme0n1", 00:07:14.827 "aliases": [ 00:07:14.827 "e2c93aa6-a9c3-4bf2-b39f-274f0a22e36a" 00:07:14.827 ], 00:07:14.827 "product_name": "NVMe disk", 00:07:14.827 "block_size": 4096, 00:07:14.827 "num_blocks": 38912, 00:07:14.827 "uuid": "e2c93aa6-a9c3-4bf2-b39f-274f0a22e36a", 00:07:14.827 "numa_id": 0, 00:07:14.827 "assigned_rate_limits": { 00:07:14.827 "rw_ios_per_sec": 0, 00:07:14.827 "rw_mbytes_per_sec": 0, 00:07:14.827 "r_mbytes_per_sec": 0, 00:07:14.827 "w_mbytes_per_sec": 0 00:07:14.827 }, 00:07:14.827 "claimed": false, 00:07:14.827 "zoned": false, 00:07:14.827 "supported_io_types": { 00:07:14.827 "read": true, 00:07:14.827 "write": true, 00:07:14.827 "unmap": true, 00:07:14.827 "flush": true, 00:07:14.827 "reset": true, 00:07:14.827 "nvme_admin": true, 00:07:14.827 "nvme_io": true, 00:07:14.827 "nvme_io_md": false, 00:07:14.827 "write_zeroes": true, 00:07:14.827 "zcopy": false, 00:07:14.827 "get_zone_info": false, 00:07:14.827 "zone_management": false, 00:07:14.827 "zone_append": false, 00:07:14.827 "compare": true, 00:07:14.827 "compare_and_write": true, 00:07:14.827 "abort": true, 00:07:14.827 "seek_hole": false, 00:07:14.827 "seek_data": false, 00:07:14.827 "copy": true, 00:07:14.827 "nvme_iov_md": false 00:07:14.827 }, 00:07:14.827 "memory_domains": [ 00:07:14.827 { 00:07:14.827 "dma_device_id": "system", 00:07:14.827 "dma_device_type": 1 00:07:14.827 } 00:07:14.827 ], 00:07:14.827 "driver_specific": { 00:07:14.827 "nvme": [ 00:07:14.827 { 00:07:14.827 "trid": { 00:07:14.827 "trtype": "TCP", 00:07:14.827 "adrfam": "IPv4", 00:07:14.827 "traddr": "10.0.0.2", 00:07:14.827 "trsvcid": "4420", 00:07:14.827 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:14.827 }, 00:07:14.827 "ctrlr_data": { 00:07:14.827 "cntlid": 1, 00:07:14.827 "vendor_id": "0x8086", 00:07:14.827 "model_number": "SPDK bdev Controller", 00:07:14.827 "serial_number": "SPDK0", 00:07:14.827 "firmware_revision": "25.01", 00:07:14.827 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:14.827 "oacs": { 00:07:14.827 "security": 0, 00:07:14.827 "format": 0, 00:07:14.827 "firmware": 0, 00:07:14.827 "ns_manage": 0 00:07:14.827 }, 00:07:14.827 "multi_ctrlr": true, 00:07:14.827 
"ana_reporting": false 00:07:14.827 }, 00:07:14.827 "vs": { 00:07:14.827 "nvme_version": "1.3" 00:07:14.827 }, 00:07:14.827 "ns_data": { 00:07:14.827 "id": 1, 00:07:14.827 "can_share": true 00:07:14.827 } 00:07:14.827 } 00:07:14.827 ], 00:07:14.827 "mp_policy": "active_passive" 00:07:14.827 } 00:07:14.827 } 00:07:14.827 ] 00:07:14.827 07:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2409557 00:07:14.827 07:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:14.827 07:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:14.827 Running I/O for 10 seconds... 00:07:16.207 Latency(us) 00:07:16.207 [2024-11-20T06:09:19.640Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:16.207 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:16.207 Nvme0n1 : 1.00 15115.00 59.04 0.00 0.00 0.00 0.00 0.00 00:07:16.207 [2024-11-20T06:09:19.640Z] =================================================================================================================== 00:07:16.207 [2024-11-20T06:09:19.640Z] Total : 15115.00 59.04 0.00 0.00 0.00 0.00 0.00 00:07:16.207 00:07:16.774 07:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b266393d-3cf3-4cb8-81b2-e9442168f81b 00:07:17.031 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.031 Nvme0n1 : 2.00 15304.50 59.78 0.00 0.00 0.00 0.00 0.00 00:07:17.031 [2024-11-20T06:09:20.464Z] =================================================================================================================== 00:07:17.031 [2024-11-20T06:09:20.464Z] Total : 15304.50 59.78 0.00 0.00 0.00 0.00 0.00 00:07:17.031 00:07:17.031 true 00:07:17.031 07:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b266393d-3cf3-4cb8-81b2-e9442168f81b 00:07:17.031 07:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:17.291 07:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:17.291 07:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:17.291 07:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2409557 00:07:17.860 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.860 Nvme0n1 : 3.00 15284.00 59.70 0.00 0.00 0.00 0.00 0.00 00:07:17.860 [2024-11-20T06:09:21.293Z] =================================================================================================================== 00:07:17.860 [2024-11-20T06:09:21.293Z] Total : 15284.00 59.70 0.00 0.00 0.00 0.00 0.00 00:07:17.860 00:07:18.799 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:18.799 Nvme0n1 : 4.00 15386.75 60.10 0.00 0.00 0.00 0.00 0.00 00:07:18.799 [2024-11-20T06:09:22.232Z] 
=================================================================================================================== 00:07:18.799 [2024-11-20T06:09:22.232Z] Total : 15386.75 60.10 0.00 0.00 0.00 0.00 0.00 00:07:18.799 00:07:20.177 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.177 Nvme0n1 : 5.00 15473.20 60.44 0.00 0.00 0.00 0.00 0.00 00:07:20.177 [2024-11-20T06:09:23.610Z] =================================================================================================================== 00:07:20.177 [2024-11-20T06:09:23.610Z] Total : 15473.20 60.44 0.00 0.00 0.00 0.00 0.00 00:07:20.177 00:07:20.809 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.809 Nvme0n1 : 6.00 15572.17 60.83 0.00 0.00 0.00 0.00 0.00 00:07:20.809 [2024-11-20T06:09:24.242Z] =================================================================================================================== 00:07:20.809 [2024-11-20T06:09:24.242Z] Total : 15572.17 60.83 0.00 0.00 0.00 0.00 0.00 00:07:20.809 00:07:22.186 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:22.186 Nvme0n1 : 7.00 15634.43 61.07 0.00 0.00 0.00 0.00 0.00 00:07:22.186 [2024-11-20T06:09:25.619Z] =================================================================================================================== 00:07:22.186 [2024-11-20T06:09:25.619Z] Total : 15634.43 61.07 0.00 0.00 0.00 0.00 0.00 00:07:22.186 00:07:23.127 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:23.127 Nvme0n1 : 8.00 15689.25 61.29 0.00 0.00 0.00 0.00 0.00 00:07:23.127 [2024-11-20T06:09:26.560Z] =================================================================================================================== 00:07:23.127 [2024-11-20T06:09:26.560Z] Total : 15689.25 61.29 0.00 0.00 0.00 0.00 0.00 00:07:23.127 00:07:24.067 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:24.067 Nvme0n1 : 9.00 15719.00 61.40 0.00 0.00 0.00 0.00 0.00 00:07:24.067 [2024-11-20T06:09:27.500Z] =================================================================================================================== 00:07:24.067 [2024-11-20T06:09:27.500Z] Total : 15719.00 61.40 0.00 0.00 0.00 0.00 0.00 00:07:24.067 00:07:25.003 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:25.003 Nvme0n1 : 10.00 15735.50 61.47 0.00 0.00 0.00 0.00 0.00 00:07:25.003 [2024-11-20T06:09:28.436Z] =================================================================================================================== 00:07:25.003 [2024-11-20T06:09:28.436Z] Total : 15735.50 61.47 0.00 0.00 0.00 0.00 0.00 00:07:25.003 00:07:25.003 00:07:25.003 Latency(us) 00:07:25.003 [2024-11-20T06:09:28.436Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:25.003 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:25.003 Nvme0n1 : 10.00 15742.61 61.49 0.00 0.00 8126.05 2633.58 15728.64 00:07:25.003 [2024-11-20T06:09:28.436Z] =================================================================================================================== 00:07:25.003 [2024-11-20T06:09:28.436Z] Total : 15742.61 61.49 0.00 0.00 8126.05 2633.58 15728.64 00:07:25.003 { 00:07:25.003 "results": [ 00:07:25.003 { 00:07:25.003 "job": "Nvme0n1", 00:07:25.003 "core_mask": "0x2", 00:07:25.004 "workload": "randwrite", 00:07:25.004 "status": "finished", 00:07:25.004 "queue_depth": 128, 00:07:25.004 "io_size": 4096, 00:07:25.004 
"runtime": 10.003612, 00:07:25.004 "iops": 15742.613767907033, 00:07:25.004 "mibps": 61.494585030886846, 00:07:25.004 "io_failed": 0, 00:07:25.004 "io_timeout": 0, 00:07:25.004 "avg_latency_us": 8126.047984504382, 00:07:25.004 "min_latency_us": 2633.5762962962963, 00:07:25.004 "max_latency_us": 15728.64 00:07:25.004 } 00:07:25.004 ], 00:07:25.004 "core_count": 1 00:07:25.004 } 00:07:25.004 07:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2409423 00:07:25.004 07:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 2409423 ']' 00:07:25.004 07:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 2409423 00:07:25.004 07:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:07:25.004 07:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:25.004 07:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2409423 00:07:25.004 07:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:25.004 07:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:25.004 07:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2409423' 00:07:25.004 killing process with pid 2409423 00:07:25.004 07:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 2409423 00:07:25.004 Received shutdown signal, test time was about 10.000000 seconds 00:07:25.004 00:07:25.004 Latency(us) 00:07:25.004 [2024-11-20T06:09:28.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:25.004 [2024-11-20T06:09:28.437Z] =================================================================================================================== 00:07:25.004 [2024-11-20T06:09:28.437Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:25.004 07:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 2409423 00:07:25.263 07:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:25.522 07:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:25.781 07:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b266393d-3cf3-4cb8-81b2-e9442168f81b 00:07:25.781 07:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:26.040 07:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:26.040 07:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:26.040 07:09:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2406178 00:07:26.040 07:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2406178 00:07:26.040 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2406178 Killed "${NVMF_APP[@]}" "$@" 00:07:26.040 07:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:26.040 07:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:26.040 07:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:26.040 07:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:26.040 07:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:26.040 07:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2410904 00:07:26.040 07:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:26.040 07:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2410904 00:07:26.040 07:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 2410904 ']' 00:07:26.040 07:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.040 07:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:26.040 07:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.040 07:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:26.040 07:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:26.040 [2024-11-20 07:09:29.451139] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:07:26.040 [2024-11-20 07:09:29.451231] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:26.299 [2024-11-20 07:09:29.525648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.299 [2024-11-20 07:09:29.583523] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:26.299 [2024-11-20 07:09:29.583579] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:26.299 [2024-11-20 07:09:29.583607] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:26.299 [2024-11-20 07:09:29.583619] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:07:26.299 [2024-11-20 07:09:29.583629] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:26.299 [2024-11-20 07:09:29.584227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.299 07:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:26.299 07:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:07:26.299 07:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:26.300 07:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:26.300 07:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:26.300 07:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:26.300 07:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:26.558 [2024-11-20 07:09:29.981559] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:26.558 [2024-11-20 07:09:29.981713] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:26.558 [2024-11-20 07:09:29.981761] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:26.817 07:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:26.817 07:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e2c93aa6-a9c3-4bf2-b39f-274f0a22e36a 00:07:26.817 07:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=e2c93aa6-a9c3-4bf2-b39f-274f0a22e36a 00:07:26.817 07:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:26.817 07:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:07:26.817 07:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:26.817 07:09:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:26.817 07:09:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:27.076 07:09:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e2c93aa6-a9c3-4bf2-b39f-274f0a22e36a -t 2000 00:07:27.335 [ 00:07:27.335 { 00:07:27.335 "name": "e2c93aa6-a9c3-4bf2-b39f-274f0a22e36a", 00:07:27.335 "aliases": [ 00:07:27.335 "lvs/lvol" 00:07:27.335 ], 00:07:27.335 "product_name": "Logical Volume", 00:07:27.335 "block_size": 4096, 00:07:27.335 "num_blocks": 38912, 00:07:27.335 "uuid": "e2c93aa6-a9c3-4bf2-b39f-274f0a22e36a", 00:07:27.335 "assigned_rate_limits": { 00:07:27.335 "rw_ios_per_sec": 0, 00:07:27.335 "rw_mbytes_per_sec": 0, 
00:07:27.335 "r_mbytes_per_sec": 0, 00:07:27.335 "w_mbytes_per_sec": 0 00:07:27.335 }, 00:07:27.335 "claimed": false, 00:07:27.335 "zoned": false, 00:07:27.335 "supported_io_types": { 00:07:27.335 "read": true, 00:07:27.335 "write": true, 00:07:27.335 "unmap": true, 00:07:27.335 "flush": false, 00:07:27.335 "reset": true, 00:07:27.335 "nvme_admin": false, 00:07:27.335 "nvme_io": false, 00:07:27.335 "nvme_io_md": false, 00:07:27.335 "write_zeroes": true, 00:07:27.335 "zcopy": false, 00:07:27.335 "get_zone_info": false, 00:07:27.335 "zone_management": false, 00:07:27.335 "zone_append": false, 00:07:27.335 "compare": false, 00:07:27.335 "compare_and_write": false, 00:07:27.335 "abort": false, 00:07:27.335 "seek_hole": true, 00:07:27.335 "seek_data": true, 00:07:27.335 "copy": false, 00:07:27.335 "nvme_iov_md": false 00:07:27.335 }, 00:07:27.335 "driver_specific": { 00:07:27.335 "lvol": { 00:07:27.335 "lvol_store_uuid": "b266393d-3cf3-4cb8-81b2-e9442168f81b", 00:07:27.335 "base_bdev": "aio_bdev", 00:07:27.335 "thin_provision": false, 00:07:27.335 "num_allocated_clusters": 38, 00:07:27.335 "snapshot": false, 00:07:27.335 "clone": false, 00:07:27.335 "esnap_clone": false 00:07:27.335 } 00:07:27.335 } 00:07:27.335 } 00:07:27.335 ] 00:07:27.335 07:09:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:07:27.335 07:09:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b266393d-3cf3-4cb8-81b2-e9442168f81b 00:07:27.335 07:09:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:27.594 07:09:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:27.594 07:09:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b266393d-3cf3-4cb8-81b2-e9442168f81b 00:07:27.594 07:09:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:27.853 07:09:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:27.853 07:09:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:28.113 [2024-11-20 07:09:31.363147] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:28.113 07:09:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b266393d-3cf3-4cb8-81b2-e9442168f81b 00:07:28.113 07:09:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:07:28.113 07:09:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b266393d-3cf3-4cb8-81b2-e9442168f81b 00:07:28.113 07:09:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:28.113 07:09:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.113 07:09:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:28.113 07:09:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.113 07:09:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:28.113 07:09:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.113 07:09:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:28.113 07:09:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:28.113 07:09:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b266393d-3cf3-4cb8-81b2-e9442168f81b 00:07:28.372 request: 00:07:28.372 { 00:07:28.372 "uuid": "b266393d-3cf3-4cb8-81b2-e9442168f81b", 00:07:28.372 "method": "bdev_lvol_get_lvstores", 00:07:28.372 "req_id": 1 00:07:28.372 } 00:07:28.372 Got JSON-RPC error response 00:07:28.372 response: 00:07:28.372 { 00:07:28.372 "code": -19, 00:07:28.372 "message": "No such device" 00:07:28.372 } 00:07:28.372 07:09:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:07:28.372 07:09:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:28.372 07:09:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:28.372 07:09:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:28.372 07:09:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:28.633 aio_bdev 00:07:28.633 07:09:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e2c93aa6-a9c3-4bf2-b39f-274f0a22e36a 00:07:28.633 07:09:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=e2c93aa6-a9c3-4bf2-b39f-274f0a22e36a 00:07:28.633 07:09:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:28.633 07:09:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:07:28.633 07:09:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:28.633 07:09:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:28.633 07:09:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:28.891 07:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e2c93aa6-a9c3-4bf2-b39f-274f0a22e36a -t 2000 00:07:29.154 [ 00:07:29.154 { 00:07:29.154 "name": "e2c93aa6-a9c3-4bf2-b39f-274f0a22e36a", 00:07:29.154 "aliases": [ 00:07:29.154 "lvs/lvol" 00:07:29.154 ], 00:07:29.154 "product_name": "Logical Volume", 00:07:29.154 "block_size": 4096, 00:07:29.154 "num_blocks": 38912, 00:07:29.154 "uuid": "e2c93aa6-a9c3-4bf2-b39f-274f0a22e36a", 00:07:29.154 "assigned_rate_limits": { 00:07:29.154 "rw_ios_per_sec": 0, 00:07:29.154 "rw_mbytes_per_sec": 0, 00:07:29.154 "r_mbytes_per_sec": 0, 00:07:29.154 "w_mbytes_per_sec": 0 00:07:29.154 }, 00:07:29.154 "claimed": false, 00:07:29.154 "zoned": false, 00:07:29.154 "supported_io_types": { 00:07:29.154 "read": true, 00:07:29.154 "write": true, 00:07:29.154 "unmap": true, 00:07:29.154 "flush": false, 00:07:29.154 "reset": true, 00:07:29.154 "nvme_admin": false, 00:07:29.154 "nvme_io": false, 00:07:29.154 "nvme_io_md": false, 00:07:29.154 "write_zeroes": true, 00:07:29.154 "zcopy": false, 00:07:29.154 "get_zone_info": false, 00:07:29.154 "zone_management": false, 00:07:29.154 "zone_append": false, 00:07:29.154 "compare": false, 00:07:29.154 "compare_and_write": false, 00:07:29.154 "abort": false, 00:07:29.154 "seek_hole": true, 00:07:29.154 "seek_data": true, 00:07:29.154 "copy": false, 00:07:29.154 "nvme_iov_md": false 00:07:29.154 }, 00:07:29.154 "driver_specific": { 00:07:29.154 "lvol": { 00:07:29.154 "lvol_store_uuid": "b266393d-3cf3-4cb8-81b2-e9442168f81b", 00:07:29.154 "base_bdev": "aio_bdev", 00:07:29.154 "thin_provision": false, 00:07:29.154 "num_allocated_clusters": 38, 00:07:29.154 "snapshot": false, 00:07:29.154 "clone": false, 00:07:29.154 "esnap_clone": false 00:07:29.154 } 00:07:29.154 } 00:07:29.154 } 00:07:29.154 ] 00:07:29.154 07:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:07:29.154 07:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b266393d-3cf3-4cb8-81b2-e9442168f81b 00:07:29.154 07:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:29.413 07:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:29.413 07:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b266393d-3cf3-4cb8-81b2-e9442168f81b 00:07:29.413 07:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:29.671 07:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:29.671 07:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e2c93aa6-a9c3-4bf2-b39f-274f0a22e36a 00:07:29.930 07:09:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b266393d-3cf3-4cb8-81b2-e9442168f81b 00:07:30.190 07:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:30.449 07:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:30.449 00:07:30.449 real 0m19.525s 00:07:30.449 user 0m48.811s 00:07:30.449 sys 0m4.902s 00:07:30.449 07:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:30.449 07:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:30.449 ************************************ 00:07:30.449 END TEST lvs_grow_dirty 00:07:30.449 ************************************ 00:07:30.707 07:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:30.707 07:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:07:30.707 07:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:07:30.707 07:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:07:30.707 07:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:30.707 07:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:07:30.707 07:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:07:30.707 07:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:07:30.707 07:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:30.707 nvmf_trace.0 00:07:30.707 07:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:07:30.707 07:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:30.707 07:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:30.707 07:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:30.707 07:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:30.707 07:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:30.707 07:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:30.707 07:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:30.707 rmmod nvme_tcp 00:07:30.707 rmmod nvme_fabrics 00:07:30.707 rmmod nvme_keyring 00:07:30.708 07:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:30.708 07:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:30.708 07:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:30.708 
07:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2410904 ']' 00:07:30.708 07:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2410904 00:07:30.708 07:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 2410904 ']' 00:07:30.708 07:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 2410904 00:07:30.708 07:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:07:30.708 07:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:30.708 07:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2410904 00:07:30.708 07:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:30.708 07:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:30.708 07:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2410904' 00:07:30.708 killing process with pid 2410904 00:07:30.708 07:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 2410904 00:07:30.708 07:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 2410904 00:07:30.968 07:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:30.968 07:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:30.968 07:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:30.968 07:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:30.968 07:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:30.968 07:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:30.968 07:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:30.968 07:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:30.968 07:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:30.968 07:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.968 07:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:30.968 07:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:32.875 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:32.875 00:07:32.875 real 0m42.845s 00:07:32.875 user 1m12.177s 00:07:32.875 sys 0m8.835s 00:07:32.875 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:32.875 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:32.875 ************************************ 00:07:32.875 END TEST nvmf_lvs_grow 00:07:32.875 ************************************ 00:07:32.875 07:09:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:32.875 07:09:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:32.875 07:09:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:32.875 07:09:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:33.135 ************************************ 00:07:33.135 START TEST nvmf_bdev_io_wait 00:07:33.135 ************************************ 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:33.135 * Looking for test storage... 00:07:33.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:33.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.135 --rc genhtml_branch_coverage=1 00:07:33.135 --rc genhtml_function_coverage=1 00:07:33.135 --rc genhtml_legend=1 00:07:33.135 --rc geninfo_all_blocks=1 00:07:33.135 --rc geninfo_unexecuted_blocks=1 00:07:33.135 00:07:33.135 ' 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:33.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.135 --rc genhtml_branch_coverage=1 00:07:33.135 --rc genhtml_function_coverage=1 00:07:33.135 --rc genhtml_legend=1 00:07:33.135 --rc geninfo_all_blocks=1 00:07:33.135 --rc geninfo_unexecuted_blocks=1 00:07:33.135 00:07:33.135 ' 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:33.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.135 --rc genhtml_branch_coverage=1 00:07:33.135 --rc genhtml_function_coverage=1 00:07:33.135 --rc genhtml_legend=1 00:07:33.135 --rc geninfo_all_blocks=1 00:07:33.135 --rc geninfo_unexecuted_blocks=1 00:07:33.135 00:07:33.135 ' 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:33.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.135 --rc genhtml_branch_coverage=1 00:07:33.135 --rc genhtml_function_coverage=1 00:07:33.135 --rc genhtml_legend=1 00:07:33.135 --rc geninfo_all_blocks=1 00:07:33.135 --rc geninfo_unexecuted_blocks=1 00:07:33.135 00:07:33.135 ' 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:33.135 07:09:36 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:33.135 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:33.136 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:33.136 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:33.136 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:33.136 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:33.136 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:33.136 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:33.136 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:33.136 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:33.136 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.136 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.136 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.136 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.136 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.136 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.136 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:33.136 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.136 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:33.136 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:33.136 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:33.136 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:33.136 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:33.136 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:33.136 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:33.136 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:33.136 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:33.136 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:33.136 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:33.136 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:33.136 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:07:33.136 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:33.136 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:33.136 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:33.136 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:33.136 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:33.136 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:33.136 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.136 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:33.136 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.136 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:33.136 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:33.136 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:33.136 07:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:35.673 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:35.673 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.673 07:09:38 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:35.673 Found net devices under 0000:09:00.0: cvl_0_0 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:35.673 Found net devices under 0000:09:00.1: cvl_0_1 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:35.673 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:35.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:35.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:07:35.674 00:07:35.674 --- 10.0.0.2 ping statistics --- 00:07:35.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.674 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:35.674 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:35.674 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:07:35.674 00:07:35.674 --- 10.0.0.1 ping statistics --- 00:07:35.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.674 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2413455 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2413455 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 2413455 ']' 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:35.674 [2024-11-20 07:09:38.712781] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
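The nvmf_tcp_init sequence traced above is what gives every TCP test in this job its two-endpoint topology: the first E810 port (cvl_0_0) is moved into a private network namespace and serves as the target side, while the second port (cvl_0_1) stays in the default namespace as the initiator. A minimal sketch of the same steps, condensed from the traced commands (the cvl_0_* names, the cvl_0_0_ns_spdk namespace and the 10.0.0.0/24 addresses are specific to this run; the real helper also tags the iptables rule with an SPDK_NVMF comment so teardown can strip it again):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port moves into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, default netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic back in
  ping -c 1 10.0.0.2                                                  # initiator -> target check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator check

Both reachability pings succeed above (0.205 ms and 0.108 ms round trips), after which the target application is launched inside the namespace via ip netns exec.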
00:07:35.674 [2024-11-20 07:09:38.712886] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.674 [2024-11-20 07:09:38.785471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:35.674 [2024-11-20 07:09:38.847077] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:35.674 [2024-11-20 07:09:38.847143] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:35.674 [2024-11-20 07:09:38.847166] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:35.674 [2024-11-20 07:09:38.847177] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:35.674 [2024-11-20 07:09:38.847186] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:35.674 [2024-11-20 07:09:38.848884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.674 [2024-11-20 07:09:38.849010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.674 [2024-11-20 07:09:38.849113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.674 [2024-11-20 07:09:38.849117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.674 07:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:35.674 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.674 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:35.674 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.674 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:07:35.674 [2024-11-20 07:09:39.056683] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:35.674 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.674 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:35.674 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.674 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:35.674 Malloc0 00:07:35.674 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.674 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:35.674 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.674 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:35.674 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.674 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:35.674 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.674 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:35.932 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.932 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:35.932 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.932 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:35.932 [2024-11-20 07:09:39.109176] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:35.932 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.932 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2413513 00:07:35.932 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2413516 00:07:35.932 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:35.932 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:35.932 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:35.932 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:35.932 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:35.932 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:35.932 { 00:07:35.932 "params": { 
00:07:35.932 "name": "Nvme$subsystem", 00:07:35.932 "trtype": "$TEST_TRANSPORT", 00:07:35.932 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:35.932 "adrfam": "ipv4", 00:07:35.932 "trsvcid": "$NVMF_PORT", 00:07:35.932 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:35.932 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:35.932 "hdgst": ${hdgst:-false}, 00:07:35.932 "ddgst": ${ddgst:-false} 00:07:35.932 }, 00:07:35.932 "method": "bdev_nvme_attach_controller" 00:07:35.932 } 00:07:35.932 EOF 00:07:35.932 )") 00:07:35.932 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2413519 00:07:35.932 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:35.932 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:35.932 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:35.932 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:35.932 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:35.932 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:35.932 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2413523 00:07:35.932 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:35.932 { 00:07:35.932 "params": { 00:07:35.932 "name": "Nvme$subsystem", 00:07:35.932 "trtype": "$TEST_TRANSPORT", 00:07:35.932 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:35.932 "adrfam": "ipv4", 00:07:35.932 "trsvcid": "$NVMF_PORT", 00:07:35.932 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:35.932 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:35.932 "hdgst": ${hdgst:-false}, 00:07:35.932 "ddgst": ${ddgst:-false} 00:07:35.932 }, 00:07:35.932 "method": "bdev_nvme_attach_controller" 00:07:35.932 } 00:07:35.932 EOF 00:07:35.932 )") 00:07:35.932 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:35.932 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:35.932 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:35.932 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:35.932 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:35.932 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:35.932 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:35.932 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:35.932 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:35.932 { 00:07:35.932 "params": { 
00:07:35.932 "name": "Nvme$subsystem", 00:07:35.932 "trtype": "$TEST_TRANSPORT", 00:07:35.933 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:35.933 "adrfam": "ipv4", 00:07:35.933 "trsvcid": "$NVMF_PORT", 00:07:35.933 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:35.933 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:35.933 "hdgst": ${hdgst:-false}, 00:07:35.933 "ddgst": ${ddgst:-false} 00:07:35.933 }, 00:07:35.933 "method": "bdev_nvme_attach_controller" 00:07:35.933 } 00:07:35.933 EOF 00:07:35.933 )") 00:07:35.933 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:35.933 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:35.933 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:35.933 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:35.933 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:35.933 { 00:07:35.933 "params": { 00:07:35.933 "name": "Nvme$subsystem", 00:07:35.933 "trtype": "$TEST_TRANSPORT", 00:07:35.933 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:35.933 "adrfam": "ipv4", 00:07:35.933 "trsvcid": "$NVMF_PORT", 00:07:35.933 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:35.933 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:35.933 "hdgst": ${hdgst:-false}, 00:07:35.933 "ddgst": ${ddgst:-false} 00:07:35.933 }, 00:07:35.933 "method": "bdev_nvme_attach_controller" 00:07:35.933 } 00:07:35.933 EOF 00:07:35.933 )") 00:07:35.933 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:35.933 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2413513 00:07:35.933 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:35.933 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:35.933 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:35.933 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:35.933 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:35.933 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:35.933 "params": { 00:07:35.933 "name": "Nvme1", 00:07:35.933 "trtype": "tcp", 00:07:35.933 "traddr": "10.0.0.2", 00:07:35.933 "adrfam": "ipv4", 00:07:35.933 "trsvcid": "4420", 00:07:35.933 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:35.933 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:35.933 "hdgst": false, 00:07:35.933 "ddgst": false 00:07:35.933 }, 00:07:35.933 "method": "bdev_nvme_attach_controller" 00:07:35.933 }' 00:07:35.933 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:07:35.933 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:35.933 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:35.933 "params": { 00:07:35.933 "name": "Nvme1", 00:07:35.933 "trtype": "tcp", 00:07:35.933 "traddr": "10.0.0.2", 00:07:35.933 "adrfam": "ipv4", 00:07:35.933 "trsvcid": "4420", 00:07:35.933 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:35.933 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:35.933 "hdgst": false, 00:07:35.933 "ddgst": false 00:07:35.933 }, 00:07:35.933 "method": "bdev_nvme_attach_controller" 00:07:35.933 }' 00:07:35.933 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:35.933 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:35.933 "params": { 00:07:35.933 "name": "Nvme1", 00:07:35.933 "trtype": "tcp", 00:07:35.933 "traddr": "10.0.0.2", 00:07:35.933 "adrfam": "ipv4", 00:07:35.933 "trsvcid": "4420", 00:07:35.933 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:35.933 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:35.933 "hdgst": false, 00:07:35.933 "ddgst": false 00:07:35.933 }, 00:07:35.933 "method": "bdev_nvme_attach_controller" 00:07:35.933 }' 00:07:35.933 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:35.933 07:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:35.933 "params": { 00:07:35.933 "name": "Nvme1", 00:07:35.933 "trtype": "tcp", 00:07:35.933 "traddr": "10.0.0.2", 00:07:35.933 "adrfam": "ipv4", 00:07:35.933 "trsvcid": "4420", 00:07:35.933 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:35.933 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:35.933 "hdgst": false, 00:07:35.933 "ddgst": false 00:07:35.933 }, 00:07:35.933 "method": "bdev_nvme_attach_controller" 00:07:35.933 }' 00:07:35.933 [2024-11-20 07:09:39.160339] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:07:35.933 [2024-11-20 07:09:39.160337] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:07:35.933 [2024-11-20 07:09:39.160337] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:07:35.933 [2024-11-20 07:09:39.160431] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-20 07:09:39.160432] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-20 07:09:39.160431] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:35.933 --proc-type=auto ] 00:07:35.933 --proc-type=auto ] 00:07:35.933 [2024-11-20 07:09:39.161022] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:07:35.933 [2024-11-20 07:09:39.161088] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:35.933 [2024-11-20 07:09:39.350535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.191 [2024-11-20 07:09:39.406598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:36.191 [2024-11-20 07:09:39.455756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.191 [2024-11-20 07:09:39.512299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:36.191 [2024-11-20 07:09:39.559340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.191 [2024-11-20 07:09:39.617142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:36.450 [2024-11-20 07:09:39.637856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.450 [2024-11-20 07:09:39.690181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:36.450 Running I/O for 1 seconds... 00:07:36.450 Running I/O for 1 seconds... 00:07:36.450 Running I/O for 1 seconds... 00:07:36.450 Running I/O for 1 seconds... 00:07:37.386 9998.00 IOPS, 39.05 MiB/s 00:07:37.386 Latency(us) 00:07:37.386 [2024-11-20T06:09:40.819Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:37.386 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:37.386 Nvme1n1 : 1.01 10041.52 39.22 0.00 0.00 12690.51 7427.41 20583.16 00:07:37.386 [2024-11-20T06:09:40.819Z] =================================================================================================================== 00:07:37.386 [2024-11-20T06:09:40.819Z] Total : 10041.52 39.22 0.00 0.00 12690.51 7427.41 20583.16 00:07:37.669 193104.00 IOPS, 754.31 MiB/s [2024-11-20T06:09:41.102Z] 8208.00 IOPS, 32.06 MiB/s 00:07:37.669 Latency(us) 00:07:37.669 [2024-11-20T06:09:41.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:37.669 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:37.669 Nvme1n1 : 1.00 192739.24 752.89 0.00 0.00 660.54 285.20 1868.99 00:07:37.669 [2024-11-20T06:09:41.102Z] =================================================================================================================== 00:07:37.669 [2024-11-20T06:09:41.102Z] Total : 192739.24 752.89 0.00 0.00 660.54 285.20 1868.99 00:07:37.669 8505.00 IOPS, 33.22 MiB/s 00:07:37.669 Latency(us) 00:07:37.669 [2024-11-20T06:09:41.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:37.669 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:37.669 Nvme1n1 : 1.01 8270.13 32.31 0.00 0.00 15404.94 5801.15 24078.41 00:07:37.669 [2024-11-20T06:09:41.102Z] =================================================================================================================== 00:07:37.669 [2024-11-20T06:09:41.102Z] Total : 8270.13 32.31 0.00 0.00 15404.94 5801.15 24078.41 00:07:37.669 00:07:37.669 Latency(us) 00:07:37.669 [2024-11-20T06:09:41.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:37.669 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:37.669 Nvme1n1 : 1.01 8585.12 33.54 0.00 0.00 14856.40 4927.34 26602.76 00:07:37.669 [2024-11-20T06:09:41.102Z] 
=================================================================================================================== 00:07:37.669 [2024-11-20T06:09:41.102Z] Total : 8585.12 33.54 0.00 0.00 14856.40 4927.34 26602.76 00:07:37.669 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2413516 00:07:37.669 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2413519 00:07:37.669 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2413523 00:07:37.669 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:37.669 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.669 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:37.669 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.669 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:37.669 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:37.669 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:37.669 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:37.669 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:37.669 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:37.669 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:37.669 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:37.669 rmmod nvme_tcp 00:07:37.669 rmmod nvme_fabrics 00:07:37.669 rmmod nvme_keyring 00:07:37.669 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:37.928 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:37.928 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:37.928 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2413455 ']' 00:07:37.928 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2413455 00:07:37.928 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 2413455 ']' 00:07:37.928 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 2413455 00:07:37.928 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:07:37.928 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:37.928 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2413455 00:07:37.928 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:37.928 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:37.928 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # 
echo 'killing process with pid 2413455' 00:07:37.928 killing process with pid 2413455 00:07:37.928 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 2413455 00:07:37.928 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 2413455 00:07:37.928 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:37.928 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:37.928 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:37.928 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:37.928 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:37.928 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:37.928 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:37.928 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:37.928 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:37.928 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.928 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:37.928 07:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:40.530 00:07:40.530 real 0m7.056s 00:07:40.530 user 0m15.635s 00:07:40.530 sys 0m3.501s 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:40.530 ************************************ 00:07:40.530 END TEST nvmf_bdev_io_wait 00:07:40.530 ************************************ 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:40.530 ************************************ 00:07:40.530 START TEST nvmf_queue_depth 00:07:40.530 ************************************ 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:40.530 * Looking for test storage... 
00:07:40.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:40.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.530 --rc genhtml_branch_coverage=1 00:07:40.530 --rc genhtml_function_coverage=1 00:07:40.530 --rc genhtml_legend=1 00:07:40.530 --rc geninfo_all_blocks=1 00:07:40.530 --rc geninfo_unexecuted_blocks=1 00:07:40.530 00:07:40.530 ' 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:40.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.530 --rc genhtml_branch_coverage=1 00:07:40.530 --rc genhtml_function_coverage=1 00:07:40.530 --rc genhtml_legend=1 00:07:40.530 --rc geninfo_all_blocks=1 00:07:40.530 --rc geninfo_unexecuted_blocks=1 00:07:40.530 00:07:40.530 ' 00:07:40.530 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:40.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.531 --rc genhtml_branch_coverage=1 00:07:40.531 --rc genhtml_function_coverage=1 00:07:40.531 --rc genhtml_legend=1 00:07:40.531 --rc geninfo_all_blocks=1 00:07:40.531 --rc geninfo_unexecuted_blocks=1 00:07:40.531 00:07:40.531 ' 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:40.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.531 --rc genhtml_branch_coverage=1 00:07:40.531 --rc genhtml_function_coverage=1 00:07:40.531 --rc genhtml_legend=1 00:07:40.531 --rc geninfo_all_blocks=1 00:07:40.531 --rc geninfo_unexecuted_blocks=1 00:07:40.531 00:07:40.531 ' 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:40.531 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:40.531 07:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:42.438 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:42.438 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:42.438 Found net devices under 0000:09:00.0: cvl_0_0 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:42.438 Found net devices under 0000:09:00.1: cvl_0_1 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:42.438 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:42.697 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:42.697 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:42.697 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:42.697 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:42.697 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:42.697 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:42.697 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:42.697 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:42.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:42.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:07:42.697 00:07:42.697 --- 10.0.0.2 ping statistics --- 00:07:42.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.697 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:07:42.697 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:42.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:42.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:07:42.697 00:07:42.697 --- 10.0.0.1 ping statistics --- 00:07:42.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.697 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:07:42.698 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:42.698 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:07:42.698 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:42.698 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:42.698 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:42.698 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:42.698 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:42.698 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:42.698 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:42.698 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:42.698 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:42.698 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:42.698 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:42.698 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2415807 00:07:42.698 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:42.698 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2415807 00:07:42.698 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 2415807 ']' 00:07:42.698 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.698 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:42.698 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.698 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:42.698 07:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:42.698 [2024-11-20 07:09:46.021915] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
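Before the target application starts, nvmf_tcp_init (traced above) arranges the two E810 ports into a point-to-point topology: one port stays in the root namespace as the initiator interface, the other moves into a dedicated namespace for the target, and an iptables rule plus two pings confirm 4420/tcp reachability in both directions. A condensed sketch of that sequence as it appears in the trace (the iptables comment tagging is omitted here):

    # Target port goes into its own network namespace; initiator port stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Allow NVMe/TCP traffic in, then verify reachability both ways.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1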
00:07:42.698 [2024-11-20 07:09:46.021999] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:42.698 [2024-11-20 07:09:46.094855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.956 [2024-11-20 07:09:46.150322] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:42.956 [2024-11-20 07:09:46.150388] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:42.956 [2024-11-20 07:09:46.150415] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:42.956 [2024-11-20 07:09:46.150426] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:42.956 [2024-11-20 07:09:46.150435] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:42.956 [2024-11-20 07:09:46.151016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.956 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:42.956 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:07:42.956 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:42.956 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:42.956 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:42.956 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:42.956 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:42.956 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.956 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:42.956 [2024-11-20 07:09:46.292959] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:42.956 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.956 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:42.956 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.956 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:42.956 Malloc0 00:07:42.956 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.956 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:42.956 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.956 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:42.956 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.956 07:09:46 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:42.956 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.956 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:42.956 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.956 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:42.956 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.956 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:42.956 [2024-11-20 07:09:46.342035] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:42.956 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.956 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2415862 00:07:42.956 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:42.956 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:42.956 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2415862 /var/tmp/bdevperf.sock 00:07:42.956 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 2415862 ']' 00:07:42.956 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:42.956 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:42.956 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:42.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:42.956 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:42.956 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:43.216 [2024-11-20 07:09:46.389460] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
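The queue_depth test body traced above configures the target entirely over RPC and then launches a bdevperf instance with a 1024-deep queue. Collapsed into the underlying commands (rpc_cmd in the trace is assumed to wrap scripts/rpc.py against the target's default /var/tmp/spdk.sock; paths are shortened):

    # Target side: transport, backing bdev, subsystem, namespace, listener on 10.0.0.2:4420.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Initiator side: bdevperf in RPC-wait mode, queue depth 1024, 4 KiB verify workload, 10 s.
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &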
00:07:43.216 [2024-11-20 07:09:46.389538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2415862 ] 00:07:43.216 [2024-11-20 07:09:46.454598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.216 [2024-11-20 07:09:46.511902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.216 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:43.216 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:07:43.216 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:43.216 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.216 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:43.475 NVMe0n1 00:07:43.475 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.475 07:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:43.733 Running I/O for 10 seconds... 00:07:45.600 8199.00 IOPS, 32.03 MiB/s [2024-11-20T06:09:50.406Z] 8463.50 IOPS, 33.06 MiB/s [2024-11-20T06:09:51.341Z] 8467.00 IOPS, 33.07 MiB/s [2024-11-20T06:09:52.274Z] 8447.25 IOPS, 33.00 MiB/s [2024-11-20T06:09:53.207Z] 8509.20 IOPS, 33.24 MiB/s [2024-11-20T06:09:54.142Z] 8531.50 IOPS, 33.33 MiB/s [2024-11-20T06:09:55.075Z] 8561.57 IOPS, 33.44 MiB/s [2024-11-20T06:09:56.007Z] 8574.25 IOPS, 33.49 MiB/s [2024-11-20T06:09:57.381Z] 8624.67 IOPS, 33.69 MiB/s [2024-11-20T06:09:57.381Z] 8608.90 IOPS, 33.63 MiB/s 00:07:53.948 Latency(us) 00:07:53.948 [2024-11-20T06:09:57.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:53.948 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:07:53.948 Verification LBA range: start 0x0 length 0x4000 00:07:53.948 NVMe0n1 : 10.07 8646.69 33.78 0.00 0.00 117943.41 12815.93 74953.77 00:07:53.948 [2024-11-20T06:09:57.381Z] =================================================================================================================== 00:07:53.948 [2024-11-20T06:09:57.381Z] Total : 8646.69 33.78 0.00 0.00 117943.41 12815.93 74953.77 00:07:53.948 { 00:07:53.948 "results": [ 00:07:53.948 { 00:07:53.948 "job": "NVMe0n1", 00:07:53.948 "core_mask": "0x1", 00:07:53.948 "workload": "verify", 00:07:53.948 "status": "finished", 00:07:53.948 "verify_range": { 00:07:53.948 "start": 0, 00:07:53.948 "length": 16384 00:07:53.948 }, 00:07:53.948 "queue_depth": 1024, 00:07:53.948 "io_size": 4096, 00:07:53.948 "runtime": 10.070214, 00:07:53.948 "iops": 8646.688143866655, 00:07:53.948 "mibps": 33.77612556197912, 00:07:53.948 "io_failed": 0, 00:07:53.948 "io_timeout": 0, 00:07:53.948 "avg_latency_us": 117943.40697974223, 00:07:53.948 "min_latency_us": 12815.92888888889, 00:07:53.948 "max_latency_us": 74953.76592592592 00:07:53.948 } 00:07:53.948 ], 00:07:53.948 "core_count": 1 00:07:53.948 } 00:07:53.948 07:09:57 
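Once bdevperf is waiting on its own RPC socket, the test attaches it to the exported namespace and kicks off the run, as traced above. The reported average latency is consistent with the configured queue depth (Little's law: 1024 outstanding I/Os / ~8647 IOPS ≈ 118 ms, matching the ~117.9 ms average in the results). The two initiator-side calls, with paths shortened:

    # Attach the NVMe-oF controller over the bdevperf RPC socket, then start the workload.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests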
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2415862 00:07:53.948 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 2415862 ']' 00:07:53.948 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 2415862 00:07:53.948 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:07:53.948 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:53.949 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2415862 00:07:53.949 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:53.949 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:53.949 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2415862' 00:07:53.949 killing process with pid 2415862 00:07:53.949 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 2415862 00:07:53.949 Received shutdown signal, test time was about 10.000000 seconds 00:07:53.949 00:07:53.949 Latency(us) 00:07:53.949 [2024-11-20T06:09:57.382Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:53.949 [2024-11-20T06:09:57.382Z] =================================================================================================================== 00:07:53.949 [2024-11-20T06:09:57.382Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:53.949 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 2415862 00:07:53.949 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:07:53.949 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:07:53.949 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:53.949 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:07:53.949 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:53.949 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:07:53.949 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:53.949 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:53.949 rmmod nvme_tcp 00:07:53.949 rmmod nvme_fabrics 00:07:53.949 rmmod nvme_keyring 00:07:54.206 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:54.206 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:07:54.206 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:07:54.206 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2415807 ']' 00:07:54.206 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2415807 00:07:54.206 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 2415807 ']' 00:07:54.206 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@956 -- # kill -0 2415807 00:07:54.206 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:07:54.206 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:54.206 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2415807 00:07:54.207 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:54.207 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:54.207 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2415807' 00:07:54.207 killing process with pid 2415807 00:07:54.207 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 2415807 00:07:54.207 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 2415807 00:07:54.465 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:54.465 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:54.465 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:54.465 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:07:54.465 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:07:54.465 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:54.465 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:07:54.465 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:54.465 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:54.465 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.465 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:54.465 07:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.372 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:56.372 00:07:56.372 real 0m16.297s 00:07:56.372 user 0m22.716s 00:07:56.372 sys 0m3.226s 00:07:56.372 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:56.372 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:56.372 ************************************ 00:07:56.372 END TEST nvmf_queue_depth 00:07:56.372 ************************************ 00:07:56.372 07:09:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:56.372 07:09:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:56.372 07:09:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:56.372 07:09:59 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:07:56.372 ************************************ 00:07:56.372 START TEST nvmf_target_multipath 00:07:56.372 ************************************ 00:07:56.372 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:56.632 * Looking for test storage... 00:07:56.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:56.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.632 --rc genhtml_branch_coverage=1 00:07:56.632 --rc genhtml_function_coverage=1 00:07:56.632 --rc genhtml_legend=1 00:07:56.632 --rc geninfo_all_blocks=1 00:07:56.632 --rc geninfo_unexecuted_blocks=1 00:07:56.632 00:07:56.632 ' 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:56.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.632 --rc genhtml_branch_coverage=1 00:07:56.632 --rc genhtml_function_coverage=1 00:07:56.632 --rc genhtml_legend=1 00:07:56.632 --rc geninfo_all_blocks=1 00:07:56.632 --rc geninfo_unexecuted_blocks=1 00:07:56.632 00:07:56.632 ' 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:56.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.632 --rc genhtml_branch_coverage=1 00:07:56.632 --rc genhtml_function_coverage=1 00:07:56.632 --rc genhtml_legend=1 00:07:56.632 --rc geninfo_all_blocks=1 00:07:56.632 --rc geninfo_unexecuted_blocks=1 00:07:56.632 00:07:56.632 ' 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:56.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.632 --rc genhtml_branch_coverage=1 00:07:56.632 --rc genhtml_function_coverage=1 00:07:56.632 --rc genhtml_legend=1 00:07:56.632 --rc geninfo_all_blocks=1 00:07:56.632 --rc geninfo_unexecuted_blocks=1 00:07:56.632 00:07:56.632 ' 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:56.632 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:56.633 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:07:56.633 07:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:59.167 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:59.167 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:59.167 Found net devices under 0000:09:00.0: cvl_0_0 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:59.167 07:10:02 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:59.167 Found net devices under 0000:09:00.1: cvl_0_1 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:59.167 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:59.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:59.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms 00:07:59.168 00:07:59.168 --- 10.0.0.2 ping statistics --- 00:07:59.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.168 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:59.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:59.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:07:59.168 00:07:59.168 --- 10.0.0.1 ping statistics --- 00:07:59.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.168 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:07:59.168 only one NIC for nvmf test 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:59.168 rmmod nvme_tcp 00:07:59.168 rmmod nvme_fabrics 00:07:59.168 rmmod nvme_keyring 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:59.168 07:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.081 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:01.081 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:01.081 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:01.081 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:01.081 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:01.081 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:01.081 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:01.081 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:01.081 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:01.081 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:01.081 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:01.081 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:08:01.081 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:01.081 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:01.081 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:01.081 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:01.081 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:01.081 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:01.081 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:01.081 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:01.081 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:01.081 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:01.081 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.081 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:01.081 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.081 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:01.081 00:08:01.081 real 0m4.651s 00:08:01.081 user 0m0.946s 00:08:01.081 sys 0m1.726s 00:08:01.081 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:01.081 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:01.081 ************************************ 00:08:01.081 END TEST nvmf_target_multipath 00:08:01.081 ************************************ 00:08:01.081 07:10:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:01.081 07:10:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:01.081 07:10:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:01.081 07:10:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:01.081 ************************************ 00:08:01.081 START TEST nvmf_zcopy 00:08:01.081 ************************************ 00:08:01.081 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:01.339 * Looking for test storage... 
00:08:01.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:01.339 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:01.339 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:08:01.339 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:01.339 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:01.339 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:01.339 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:01.339 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:01.339 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:01.339 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:01.339 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:01.339 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:01.339 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:01.339 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:01.339 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:01.339 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:01.339 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:01.339 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:01.339 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:01.339 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:01.339 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:01.339 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:01.339 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.339 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:01.339 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:01.339 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:01.339 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:01.339 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.339 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:01.339 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:01.339 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:01.339 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:01.339 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:01.339 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:01.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.340 --rc genhtml_branch_coverage=1 00:08:01.340 --rc genhtml_function_coverage=1 00:08:01.340 --rc genhtml_legend=1 00:08:01.340 --rc geninfo_all_blocks=1 00:08:01.340 --rc geninfo_unexecuted_blocks=1 00:08:01.340 00:08:01.340 ' 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:01.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.340 --rc genhtml_branch_coverage=1 00:08:01.340 --rc genhtml_function_coverage=1 00:08:01.340 --rc genhtml_legend=1 00:08:01.340 --rc geninfo_all_blocks=1 00:08:01.340 --rc geninfo_unexecuted_blocks=1 00:08:01.340 00:08:01.340 ' 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:01.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.340 --rc genhtml_branch_coverage=1 00:08:01.340 --rc genhtml_function_coverage=1 00:08:01.340 --rc genhtml_legend=1 00:08:01.340 --rc geninfo_all_blocks=1 00:08:01.340 --rc geninfo_unexecuted_blocks=1 00:08:01.340 00:08:01.340 ' 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:01.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.340 --rc genhtml_branch_coverage=1 00:08:01.340 --rc genhtml_function_coverage=1 00:08:01.340 --rc genhtml_legend=1 00:08:01.340 --rc geninfo_all_blocks=1 00:08:01.340 --rc geninfo_unexecuted_blocks=1 00:08:01.340 00:08:01.340 ' 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:01.340 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:01.340 07:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:03.880 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:03.880 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:03.880 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:03.880 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:03.880 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:03.880 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:03.880 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:03.880 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:03.880 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:03.880 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:03.880 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:03.880 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:03.880 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:03.880 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:03.880 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:03.880 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:03.880 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:03.880 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:03.880 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:03.880 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:03.880 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:03.880 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:03.880 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:03.880 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:03.880 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:03.880 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:03.880 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:03.880 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:03.880 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:03.880 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:03.880 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:03.880 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:03.880 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:03.880 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:03.880 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:03.880 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:03.880 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:03.880 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:03.880 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.880 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.880 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:03.880 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:03.881 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:03.881 Found net devices under 0000:09:00.0: cvl_0_0 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:03.881 Found net devices under 0000:09:00.1: cvl_0_1 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:03.881 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:03.881 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:08:03.881 00:08:03.881 --- 10.0.0.2 ping statistics --- 00:08:03.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.881 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:03.881 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:03.881 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:08:03.881 00:08:03.881 --- 10.0.0.1 ping statistics --- 00:08:03.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.881 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2421082 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2421082 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 2421082 ']' 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:03.881 07:10:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:03.881 [2024-11-20 07:10:06.967877] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:08:03.881 [2024-11-20 07:10:06.967966] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.881 [2024-11-20 07:10:07.039457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.881 [2024-11-20 07:10:07.097986] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:03.881 [2024-11-20 07:10:07.098044] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:03.881 [2024-11-20 07:10:07.098058] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:03.881 [2024-11-20 07:10:07.098070] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:03.881 [2024-11-20 07:10:07.098080] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:03.881 [2024-11-20 07:10:07.098736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.881 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:03.881 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:08:03.881 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:03.881 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:03.881 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:03.881 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:03.881 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:03.882 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:03.882 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.882 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:03.882 [2024-11-20 07:10:07.256002] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:03.882 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.882 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:03.882 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.882 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:03.882 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.882 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:03.882 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.882 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:03.882 [2024-11-20 07:10:07.272209] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:03.882 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.882 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:03.882 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.882 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:03.882 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.882 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:03.882 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.882 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:03.882 malloc0 00:08:03.882 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.882 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:03.882 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.882 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:03.882 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.882 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:03.882 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:03.882 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:03.882 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:03.882 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:03.882 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:03.882 { 00:08:03.882 "params": { 00:08:03.882 "name": "Nvme$subsystem", 00:08:03.882 "trtype": "$TEST_TRANSPORT", 00:08:03.882 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:03.882 "adrfam": "ipv4", 00:08:03.882 "trsvcid": "$NVMF_PORT", 00:08:03.882 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:03.882 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:03.882 "hdgst": ${hdgst:-false}, 00:08:03.882 "ddgst": ${ddgst:-false} 00:08:03.882 }, 00:08:03.882 "method": "bdev_nvme_attach_controller" 00:08:03.882 } 00:08:03.882 EOF 00:08:03.882 )") 00:08:04.142 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:04.142 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
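The rpc_cmd calls traced just above are the entire target-side bring-up for the zcopy test, issued against the nvmf_tgt instance started earlier inside the namespace. Written out as direct scripts/rpc.py calls (the rpc.py path and the default /var/tmp/spdk.sock socket are assumptions here; rpc_cmd resolves both inside the test tree), the sequence mirrored from the trace is roughly:

# Sketch of the zcopy target bring-up, mirroring the rpc_cmd trace above.
RPC="./scripts/rpc.py"                                        # assumed path within the SPDK tree

$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy             # TCP transport, in-capsule data size 0, zero-copy enabled
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -a -s SPDK00000000000001 -m 10                           # allow any host, serial number, up to 10 namespaces
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
     -t tcp -a 10.0.0.2 -s 4420                               # data listener on the namespaced target IP
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_malloc_create 32 4096 -b malloc0                    # 32 MiB RAM bdev with 4 KiB blocks
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # expose malloc0 as NSID 1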
00:08:04.142 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:04.142 07:10:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:04.142 "params": { 00:08:04.142 "name": "Nvme1", 00:08:04.142 "trtype": "tcp", 00:08:04.142 "traddr": "10.0.0.2", 00:08:04.142 "adrfam": "ipv4", 00:08:04.142 "trsvcid": "4420", 00:08:04.142 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:04.142 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:04.142 "hdgst": false, 00:08:04.142 "ddgst": false 00:08:04.142 }, 00:08:04.142 "method": "bdev_nvme_attach_controller" 00:08:04.142 }' 00:08:04.142 [2024-11-20 07:10:07.357298] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:08:04.142 [2024-11-20 07:10:07.357389] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2421104 ] 00:08:04.142 [2024-11-20 07:10:07.424249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.142 [2024-11-20 07:10:07.483992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.400 Running I/O for 10 seconds... 00:08:06.705 5830.00 IOPS, 45.55 MiB/s [2024-11-20T06:10:11.072Z] 5866.50 IOPS, 45.83 MiB/s [2024-11-20T06:10:12.006Z] 5884.33 IOPS, 45.97 MiB/s [2024-11-20T06:10:13.082Z] 5896.00 IOPS, 46.06 MiB/s [2024-11-20T06:10:14.013Z] 5902.00 IOPS, 46.11 MiB/s [2024-11-20T06:10:14.945Z] 5901.00 IOPS, 46.10 MiB/s [2024-11-20T06:10:15.877Z] 5909.29 IOPS, 46.17 MiB/s [2024-11-20T06:10:17.251Z] 5911.75 IOPS, 46.19 MiB/s [2024-11-20T06:10:18.185Z] 5914.67 IOPS, 46.21 MiB/s [2024-11-20T06:10:18.185Z] 5915.60 IOPS, 46.22 MiB/s 00:08:14.752 Latency(us) 00:08:14.752 [2024-11-20T06:10:18.185Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.752 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:14.752 Verification LBA range: start 0x0 length 0x1000 00:08:14.752 Nvme1n1 : 10.02 5918.52 46.24 0.00 0.00 21569.20 3737.98 29321.29 00:08:14.752 [2024-11-20T06:10:18.185Z] =================================================================================================================== 00:08:14.752 [2024-11-20T06:10:18.185Z] Total : 5918.52 46.24 0.00 0.00 21569.20 3737.98 29321.29 00:08:14.752 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2422431 00:08:14.752 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:14.752 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:14.752 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:14.752 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:14.752 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:14.752 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:14.752 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:14.752 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:14.752 { 00:08:14.752 "params": { 00:08:14.752 "name": 
"Nvme$subsystem", 00:08:14.752 "trtype": "$TEST_TRANSPORT", 00:08:14.752 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:14.752 "adrfam": "ipv4", 00:08:14.752 "trsvcid": "$NVMF_PORT", 00:08:14.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:14.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:14.752 "hdgst": ${hdgst:-false}, 00:08:14.752 "ddgst": ${ddgst:-false} 00:08:14.752 }, 00:08:14.752 "method": "bdev_nvme_attach_controller" 00:08:14.752 } 00:08:14.752 EOF 00:08:14.752 )") 00:08:14.752 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:14.752 [2024-11-20 07:10:18.052411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.752 [2024-11-20 07:10:18.052457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.752 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:08:14.752 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:14.752 07:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:14.752 "params": { 00:08:14.752 "name": "Nvme1", 00:08:14.752 "trtype": "tcp", 00:08:14.752 "traddr": "10.0.0.2", 00:08:14.752 "adrfam": "ipv4", 00:08:14.752 "trsvcid": "4420", 00:08:14.752 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:14.752 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:14.752 "hdgst": false, 00:08:14.752 "ddgst": false 00:08:14.752 }, 00:08:14.752 "method": "bdev_nvme_attach_controller" 00:08:14.752 }' 00:08:14.752 [2024-11-20 07:10:18.060358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.752 [2024-11-20 07:10:18.060384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.752 [2024-11-20 07:10:18.068381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.752 [2024-11-20 07:10:18.068405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.752 [2024-11-20 07:10:18.076408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.752 [2024-11-20 07:10:18.076431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.752 [2024-11-20 07:10:18.084436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.752 [2024-11-20 07:10:18.084459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.752 [2024-11-20 07:10:18.092440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.752 [2024-11-20 07:10:18.092463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.752 [2024-11-20 07:10:18.093674] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:08:14.752 [2024-11-20 07:10:18.093731] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2422431 ] 00:08:14.752 [2024-11-20 07:10:18.100460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.752 [2024-11-20 07:10:18.100482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.752 [2024-11-20 07:10:18.108483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.752 [2024-11-20 07:10:18.108506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.752 [2024-11-20 07:10:18.116504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.752 [2024-11-20 07:10:18.116527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.752 [2024-11-20 07:10:18.124524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.752 [2024-11-20 07:10:18.124546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.752 [2024-11-20 07:10:18.132545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.752 [2024-11-20 07:10:18.132567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.752 [2024-11-20 07:10:18.140567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.752 [2024-11-20 07:10:18.140606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.752 [2024-11-20 07:10:18.148603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.752 [2024-11-20 07:10:18.148624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.752 [2024-11-20 07:10:18.156622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.752 [2024-11-20 07:10:18.156642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.752 [2024-11-20 07:10:18.162647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.752 [2024-11-20 07:10:18.164664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.752 [2024-11-20 07:10:18.164685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.752 [2024-11-20 07:10:18.172705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.752 [2024-11-20 07:10:18.172738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.752 [2024-11-20 07:10:18.180729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.752 [2024-11-20 07:10:18.180783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.010 [2024-11-20 07:10:18.188714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.010 [2024-11-20 07:10:18.188734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.010 [2024-11-20 07:10:18.196719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.010 [2024-11-20 07:10:18.196740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:08:15.010 [2024-11-20 07:10:18.204743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.010 [2024-11-20 07:10:18.204763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.010 [2024-11-20 07:10:18.212764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.010 [2024-11-20 07:10:18.212785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.010 [2024-11-20 07:10:18.220784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.010 [2024-11-20 07:10:18.220804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.010 [2024-11-20 07:10:18.225171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.010 [2024-11-20 07:10:18.228806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.010 [2024-11-20 07:10:18.228826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.010 [2024-11-20 07:10:18.236827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.010 [2024-11-20 07:10:18.236847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.010 [2024-11-20 07:10:18.244877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.010 [2024-11-20 07:10:18.244910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.010 [2024-11-20 07:10:18.252903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.010 [2024-11-20 07:10:18.252936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.010 [2024-11-20 07:10:18.260927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.010 [2024-11-20 07:10:18.260964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.010 [2024-11-20 07:10:18.268949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.010 [2024-11-20 07:10:18.268983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.010 [2024-11-20 07:10:18.276974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.010 [2024-11-20 07:10:18.277011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.010 [2024-11-20 07:10:18.284993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.010 [2024-11-20 07:10:18.285028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.010 [2024-11-20 07:10:18.292990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.010 [2024-11-20 07:10:18.293013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.010 [2024-11-20 07:10:18.301016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.010 [2024-11-20 07:10:18.301040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.010 [2024-11-20 07:10:18.309053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.010 [2024-11-20 07:10:18.309087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.010 [2024-11-20 
07:10:18.317094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.010 [2024-11-20 07:10:18.317128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.010 [2024-11-20 07:10:18.325104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.010 [2024-11-20 07:10:18.325132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.010 [2024-11-20 07:10:18.333108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.010 [2024-11-20 07:10:18.333129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.010 [2024-11-20 07:10:18.341137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.010 [2024-11-20 07:10:18.341161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.010 [2024-11-20 07:10:18.349160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.010 [2024-11-20 07:10:18.349185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.010 [2024-11-20 07:10:18.357181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.010 [2024-11-20 07:10:18.357204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.010 [2024-11-20 07:10:18.365202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.010 [2024-11-20 07:10:18.365224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.010 [2024-11-20 07:10:18.373248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.010 [2024-11-20 07:10:18.373274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.010 [2024-11-20 07:10:18.381246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.010 [2024-11-20 07:10:18.381269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.010 [2024-11-20 07:10:18.389267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.010 [2024-11-20 07:10:18.389311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.011 [2024-11-20 07:10:18.397300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.011 [2024-11-20 07:10:18.397330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.011 [2024-11-20 07:10:18.405331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.011 [2024-11-20 07:10:18.405353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.011 [2024-11-20 07:10:18.413351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.011 [2024-11-20 07:10:18.413373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.011 [2024-11-20 07:10:18.421371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.011 [2024-11-20 07:10:18.421393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.011 [2024-11-20 07:10:18.429389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.011 [2024-11-20 07:10:18.429412] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.011 [2024-11-20 07:10:18.437443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.011 [2024-11-20 07:10:18.437468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.268 [2024-11-20 07:10:18.445451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.268 [2024-11-20 07:10:18.445475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.268 [2024-11-20 07:10:18.453452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.268 [2024-11-20 07:10:18.453473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.268 [2024-11-20 07:10:18.461476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.268 [2024-11-20 07:10:18.461498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.268 [2024-11-20 07:10:18.469499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.268 [2024-11-20 07:10:18.469520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.268 [2024-11-20 07:10:18.477526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.268 [2024-11-20 07:10:18.477550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.268 [2024-11-20 07:10:18.485543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.268 [2024-11-20 07:10:18.485565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.268 [2024-11-20 07:10:18.493564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.268 [2024-11-20 07:10:18.493601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.268 [2024-11-20 07:10:18.501600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.268 [2024-11-20 07:10:18.501622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.268 [2024-11-20 07:10:18.509625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.268 [2024-11-20 07:10:18.509661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.268 [2024-11-20 07:10:18.517650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.268 [2024-11-20 07:10:18.517672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.268 [2024-11-20 07:10:18.525672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.268 [2024-11-20 07:10:18.525692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.268 [2024-11-20 07:10:18.533687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.268 [2024-11-20 07:10:18.533712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.268 [2024-11-20 07:10:18.541702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.268 [2024-11-20 07:10:18.541725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.268 Running I/O for 5 seconds... 
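The long run of subsystem.c / nvmf_rpc.c errors before and after this point comes from the test repeatedly issuing nvmf_subsystem_add_ns for NSID 1 against cnode1 while that namespace is still attached; the target rejects each attempt and the RPC layer logs "Unable to add namespace". The run carries on through these messages, which suggests they are an exercised error path of the zcopy test rather than a failure of the test itself. For reference, a remove-then-re-add cycle is what the successful form of that operation would look like (sketch only; rpc.py path assumed as before):

# Detach NSID 1, re-attach malloc0 under the same NSID, then dump subsystem state.
NQN=nqn.2016-06.io.spdk:cnode1
./scripts/rpc.py nvmf_subsystem_remove_ns "$NQN" 1
./scripts/rpc.py nvmf_subsystem_add_ns "$NQN" malloc0 -n 1
./scripts/rpc.py nvmf_get_subsystems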
(the same "Requested NSID 1 already in use" / "Unable to add namespace" pair keeps repeating at roughly 10 ms intervals while the I/O workload runs, timestamps 07:10:18.553 through 07:10:19.543, elapsed 00:08:15.268 to 00:08:16.302)
11700.00 IOPS, 91.41 MiB/s [2024-11-20T06:10:19.735Z]
(the error pair keeps repeating, timestamps 07:10:19.554 through 07:10:20.544, elapsed 00:08:16.303 to 00:08:17.338)
11697.50 IOPS, 91.39 MiB/s [2024-11-20T06:10:20.771Z]
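As a quick consistency check on the two throughput samples so far (a derived calculation, not something the log prints), both readings correspond to a fixed I/O size of 8 KiB, since IOPS times 8192 bytes reproduces the reported MiB/s:

  # 8 KiB per I/O is inferred from the numbers above, not stated anywhere in this log
  awk 'BEGIN { printf "%.2f MiB/s\n", 11700.00 * 8192 / (1024 * 1024) }'   # -> 91.41 MiB/s
  awk 'BEGIN { printf "%.2f MiB/s\n", 11697.50 * 8192 / (1024 * 1024) }'   # -> 91.39 MiB/s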
(the error pair keeps repeating, timestamps 07:10:20.554 through 07:10:21.551, elapsed 00:08:17.338 to 00:08:18.372)
11723.33 IOPS, 91.59 MiB/s [2024-11-20T06:10:21.805Z]
(the error pair keeps repeating, timestamps 07:10:21.561 through 07:10:21.584)
[2024-11-20 07:10:21.595541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:18.372 [2024-11-20 07:10:21.595569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.372 [2024-11-20 07:10:21.608287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.372 [2024-11-20 07:10:21.608324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.372 [2024-11-20 07:10:21.618859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.372 [2024-11-20 07:10:21.618888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.372 [2024-11-20 07:10:21.629849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.372 [2024-11-20 07:10:21.629878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.372 [2024-11-20 07:10:21.642929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.372 [2024-11-20 07:10:21.642957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.372 [2024-11-20 07:10:21.653154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.372 [2024-11-20 07:10:21.653181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.372 [2024-11-20 07:10:21.664135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.372 [2024-11-20 07:10:21.664169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.372 [2024-11-20 07:10:21.676929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.372 [2024-11-20 07:10:21.676957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.372 [2024-11-20 07:10:21.687149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.373 [2024-11-20 07:10:21.687176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.373 [2024-11-20 07:10:21.698060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.373 [2024-11-20 07:10:21.698088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.373 [2024-11-20 07:10:21.710298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.373 [2024-11-20 07:10:21.710335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.373 [2024-11-20 07:10:21.719724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.373 [2024-11-20 07:10:21.719751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.373 [2024-11-20 07:10:21.730579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.373 [2024-11-20 07:10:21.730607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.373 [2024-11-20 07:10:21.741711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.373 [2024-11-20 07:10:21.741738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.373 [2024-11-20 07:10:21.754541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.373 [2024-11-20 07:10:21.754569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.373 [2024-11-20 07:10:21.765187] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.373 [2024-11-20 07:10:21.765214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.373 [2024-11-20 07:10:21.776023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.373 [2024-11-20 07:10:21.776050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.373 [2024-11-20 07:10:21.788270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.373 [2024-11-20 07:10:21.788297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.373 [2024-11-20 07:10:21.797756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.373 [2024-11-20 07:10:21.797783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.631 [2024-11-20 07:10:21.810647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.631 [2024-11-20 07:10:21.810675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.631 [2024-11-20 07:10:21.820912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.631 [2024-11-20 07:10:21.820939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.631 [2024-11-20 07:10:21.831572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.631 [2024-11-20 07:10:21.831600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.631 [2024-11-20 07:10:21.842843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.631 [2024-11-20 07:10:21.842871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.631 [2024-11-20 07:10:21.853724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.631 [2024-11-20 07:10:21.853752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.631 [2024-11-20 07:10:21.864378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.631 [2024-11-20 07:10:21.864406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.631 [2024-11-20 07:10:21.874831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.631 [2024-11-20 07:10:21.874866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.631 [2024-11-20 07:10:21.885655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.631 [2024-11-20 07:10:21.885683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.631 [2024-11-20 07:10:21.896207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.631 [2024-11-20 07:10:21.896234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.631 [2024-11-20 07:10:21.907104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.631 [2024-11-20 07:10:21.907132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.631 [2024-11-20 07:10:21.917683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.631 [2024-11-20 07:10:21.917710] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.631 [2024-11-20 07:10:21.930198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.631 [2024-11-20 07:10:21.930225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.631 [2024-11-20 07:10:21.942055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.631 [2024-11-20 07:10:21.942082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.631 [2024-11-20 07:10:21.950924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.631 [2024-11-20 07:10:21.950951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.631 [2024-11-20 07:10:21.962446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.631 [2024-11-20 07:10:21.962474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.631 [2024-11-20 07:10:21.973286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.631 [2024-11-20 07:10:21.973322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.631 [2024-11-20 07:10:21.983796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.631 [2024-11-20 07:10:21.983824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.631 [2024-11-20 07:10:21.994684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.631 [2024-11-20 07:10:21.994712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.631 [2024-11-20 07:10:22.007469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.631 [2024-11-20 07:10:22.007496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.631 [2024-11-20 07:10:22.017794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.631 [2024-11-20 07:10:22.017821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.631 [2024-11-20 07:10:22.028659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.631 [2024-11-20 07:10:22.028687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.631 [2024-11-20 07:10:22.039762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.631 [2024-11-20 07:10:22.039790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.631 [2024-11-20 07:10:22.050524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.631 [2024-11-20 07:10:22.050552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.889 [2024-11-20 07:10:22.063428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.889 [2024-11-20 07:10:22.063457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.889 [2024-11-20 07:10:22.073803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.889 [2024-11-20 07:10:22.073849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.889 [2024-11-20 07:10:22.084490] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.889 [2024-11-20 07:10:22.084527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.889 [2024-11-20 07:10:22.095481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.889 [2024-11-20 07:10:22.095511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.889 [2024-11-20 07:10:22.106097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.889 [2024-11-20 07:10:22.106125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.889 [2024-11-20 07:10:22.118398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.889 [2024-11-20 07:10:22.118430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.889 [2024-11-20 07:10:22.128869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.889 [2024-11-20 07:10:22.128898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.889 [2024-11-20 07:10:22.140053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.889 [2024-11-20 07:10:22.140081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.889 [2024-11-20 07:10:22.151160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.889 [2024-11-20 07:10:22.151187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.889 [2024-11-20 07:10:22.162106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.889 [2024-11-20 07:10:22.162134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.889 [2024-11-20 07:10:22.172515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.889 [2024-11-20 07:10:22.172542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.889 [2024-11-20 07:10:22.183341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.889 [2024-11-20 07:10:22.183370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.889 [2024-11-20 07:10:22.195917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.889 [2024-11-20 07:10:22.195944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.889 [2024-11-20 07:10:22.206043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.889 [2024-11-20 07:10:22.206070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.889 [2024-11-20 07:10:22.216673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.889 [2024-11-20 07:10:22.216700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.889 [2024-11-20 07:10:22.227662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.889 [2024-11-20 07:10:22.227689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.889 [2024-11-20 07:10:22.238106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.889 [2024-11-20 07:10:22.238134] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.889 [2024-11-20 07:10:22.249049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.889 [2024-11-20 07:10:22.249077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.889 [2024-11-20 07:10:22.259547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.889 [2024-11-20 07:10:22.259575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.889 [2024-11-20 07:10:22.272122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.889 [2024-11-20 07:10:22.272150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.889 [2024-11-20 07:10:22.281718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.889 [2024-11-20 07:10:22.281746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.889 [2024-11-20 07:10:22.292528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.889 [2024-11-20 07:10:22.292556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.889 [2024-11-20 07:10:22.303172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.889 [2024-11-20 07:10:22.303199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.889 [2024-11-20 07:10:22.314001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.889 [2024-11-20 07:10:22.314028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.148 [2024-11-20 07:10:22.324499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.148 [2024-11-20 07:10:22.324527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.148 [2024-11-20 07:10:22.335501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.148 [2024-11-20 07:10:22.335528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.148 [2024-11-20 07:10:22.346453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.148 [2024-11-20 07:10:22.346482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.148 [2024-11-20 07:10:22.359427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.148 [2024-11-20 07:10:22.359456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.148 [2024-11-20 07:10:22.369879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.148 [2024-11-20 07:10:22.369907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.148 [2024-11-20 07:10:22.380643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.148 [2024-11-20 07:10:22.380671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.148 [2024-11-20 07:10:22.394322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.148 [2024-11-20 07:10:22.394350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.148 [2024-11-20 07:10:22.405056] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.148 [2024-11-20 07:10:22.405084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.148 [2024-11-20 07:10:22.416284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.148 [2024-11-20 07:10:22.416323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.148 [2024-11-20 07:10:22.428639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.148 [2024-11-20 07:10:22.428667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.148 [2024-11-20 07:10:22.438057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.148 [2024-11-20 07:10:22.438085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.148 [2024-11-20 07:10:22.449512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.148 [2024-11-20 07:10:22.449540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.148 [2024-11-20 07:10:22.460254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.148 [2024-11-20 07:10:22.460282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.148 [2024-11-20 07:10:22.470410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.148 [2024-11-20 07:10:22.470438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.148 [2024-11-20 07:10:22.480984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.148 [2024-11-20 07:10:22.481012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.148 [2024-11-20 07:10:22.491858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.148 [2024-11-20 07:10:22.491886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.148 [2024-11-20 07:10:22.502577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.148 [2024-11-20 07:10:22.502605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.148 [2024-11-20 07:10:22.513170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.149 [2024-11-20 07:10:22.513198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.149 [2024-11-20 07:10:22.524092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.149 [2024-11-20 07:10:22.524120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.149 [2024-11-20 07:10:22.534880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.149 [2024-11-20 07:10:22.534908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.149 [2024-11-20 07:10:22.545603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.149 [2024-11-20 07:10:22.545630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.149 [2024-11-20 07:10:22.556018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.149 [2024-11-20 07:10:22.556044] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.149 11748.75 IOPS, 91.79 MiB/s [2024-11-20T06:10:22.582Z] [2024-11-20 07:10:22.567109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.149 [2024-11-20 07:10:22.567136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.149 [2024-11-20 07:10:22.578155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.149 [2024-11-20 07:10:22.578182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.407 [2024-11-20 07:10:22.589562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.407 [2024-11-20 07:10:22.589590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.407 [2024-11-20 07:10:22.600492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.407 [2024-11-20 07:10:22.600521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.407 [2024-11-20 07:10:22.611380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.407 [2024-11-20 07:10:22.611408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.407 [2024-11-20 07:10:22.624424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.407 [2024-11-20 07:10:22.624452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.407 [2024-11-20 07:10:22.634835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.407 [2024-11-20 07:10:22.634862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.407 [2024-11-20 07:10:22.645405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.407 [2024-11-20 07:10:22.645433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.407 [2024-11-20 07:10:22.656223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.407 [2024-11-20 07:10:22.656252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.407 [2024-11-20 07:10:22.666871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.407 [2024-11-20 07:10:22.666899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.407 [2024-11-20 07:10:22.677456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.407 [2024-11-20 07:10:22.677484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.407 [2024-11-20 07:10:22.688152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.407 [2024-11-20 07:10:22.688180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.407 [2024-11-20 07:10:22.698740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.407 [2024-11-20 07:10:22.698776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.407 [2024-11-20 07:10:22.709265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.407 [2024-11-20 07:10:22.709293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.407 [2024-11-20 
07:10:22.720164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.407 [2024-11-20 07:10:22.720192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.407 [2024-11-20 07:10:22.732955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.407 [2024-11-20 07:10:22.732983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.407 [2024-11-20 07:10:22.742905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.407 [2024-11-20 07:10:22.742932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.407 [2024-11-20 07:10:22.753949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.407 [2024-11-20 07:10:22.753977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.407 [2024-11-20 07:10:22.764902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.407 [2024-11-20 07:10:22.764930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.407 [2024-11-20 07:10:22.775693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.407 [2024-11-20 07:10:22.775721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.407 [2024-11-20 07:10:22.788133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.407 [2024-11-20 07:10:22.788160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.407 [2024-11-20 07:10:22.797564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.407 [2024-11-20 07:10:22.797591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.407 [2024-11-20 07:10:22.808689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.407 [2024-11-20 07:10:22.808716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.407 [2024-11-20 07:10:22.819144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.407 [2024-11-20 07:10:22.819172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.407 [2024-11-20 07:10:22.829947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.407 [2024-11-20 07:10:22.829975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.665 [2024-11-20 07:10:22.842410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.665 [2024-11-20 07:10:22.842438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.665 [2024-11-20 07:10:22.852667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.665 [2024-11-20 07:10:22.852695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.665 [2024-11-20 07:10:22.863468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.665 [2024-11-20 07:10:22.863496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.665 [2024-11-20 07:10:22.874131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.665 [2024-11-20 07:10:22.874159] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.665 [2024-11-20 07:10:22.887095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.665 [2024-11-20 07:10:22.887122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.665 [2024-11-20 07:10:22.897001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.665 [2024-11-20 07:10:22.897029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.666 [2024-11-20 07:10:22.907766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.666 [2024-11-20 07:10:22.907801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.666 [2024-11-20 07:10:22.918336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.666 [2024-11-20 07:10:22.918364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.666 [2024-11-20 07:10:22.928899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.666 [2024-11-20 07:10:22.928926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.666 [2024-11-20 07:10:22.939232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.666 [2024-11-20 07:10:22.939259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.666 [2024-11-20 07:10:22.949900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.666 [2024-11-20 07:10:22.949928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.666 [2024-11-20 07:10:22.962438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.666 [2024-11-20 07:10:22.962466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.666 [2024-11-20 07:10:22.972733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.666 [2024-11-20 07:10:22.972761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.666 [2024-11-20 07:10:22.983465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.666 [2024-11-20 07:10:22.983492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.666 [2024-11-20 07:10:22.994386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.666 [2024-11-20 07:10:22.994413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.666 [2024-11-20 07:10:23.004891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.666 [2024-11-20 07:10:23.004918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.666 [2024-11-20 07:10:23.015266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.666 [2024-11-20 07:10:23.015295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.666 [2024-11-20 07:10:23.027941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.666 [2024-11-20 07:10:23.027969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.666 [2024-11-20 07:10:23.037810] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.666 [2024-11-20 07:10:23.037838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.666 [2024-11-20 07:10:23.048549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.666 [2024-11-20 07:10:23.048577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.666 [2024-11-20 07:10:23.061323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.666 [2024-11-20 07:10:23.061351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.666 [2024-11-20 07:10:23.071742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.666 [2024-11-20 07:10:23.071770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.666 [2024-11-20 07:10:23.082521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.666 [2024-11-20 07:10:23.082549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.666 [2024-11-20 07:10:23.093349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.666 [2024-11-20 07:10:23.093377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.924 [2024-11-20 07:10:23.104114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.924 [2024-11-20 07:10:23.104143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.924 [2024-11-20 07:10:23.117021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.924 [2024-11-20 07:10:23.117057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.924 [2024-11-20 07:10:23.127506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.924 [2024-11-20 07:10:23.127534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.924 [2024-11-20 07:10:23.138364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.924 [2024-11-20 07:10:23.138391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.924 [2024-11-20 07:10:23.151873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.924 [2024-11-20 07:10:23.151900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.924 [2024-11-20 07:10:23.162078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.924 [2024-11-20 07:10:23.162106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.924 [2024-11-20 07:10:23.172789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.924 [2024-11-20 07:10:23.172817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.924 [2024-11-20 07:10:23.183625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.924 [2024-11-20 07:10:23.183653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.924 [2024-11-20 07:10:23.193706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.924 [2024-11-20 07:10:23.193733] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.924 [2024-11-20 07:10:23.204153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.924 [2024-11-20 07:10:23.204196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.924 [2024-11-20 07:10:23.214793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.924 [2024-11-20 07:10:23.214836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.924 [2024-11-20 07:10:23.225639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.924 [2024-11-20 07:10:23.225668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.924 [2024-11-20 07:10:23.236177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.924 [2024-11-20 07:10:23.236204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.924 [2024-11-20 07:10:23.246906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.924 [2024-11-20 07:10:23.246935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.924 [2024-11-20 07:10:23.257656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.924 [2024-11-20 07:10:23.257683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.924 [2024-11-20 07:10:23.268259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.924 [2024-11-20 07:10:23.268287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.924 [2024-11-20 07:10:23.282499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.924 [2024-11-20 07:10:23.282528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.924 [2024-11-20 07:10:23.292454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.924 [2024-11-20 07:10:23.292483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.924 [2024-11-20 07:10:23.303363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.924 [2024-11-20 07:10:23.303391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.924 [2024-11-20 07:10:23.314100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.924 [2024-11-20 07:10:23.314127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.924 [2024-11-20 07:10:23.325158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.924 [2024-11-20 07:10:23.325196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.924 [2024-11-20 07:10:23.337498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.925 [2024-11-20 07:10:23.337526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.925 [2024-11-20 07:10:23.346367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.925 [2024-11-20 07:10:23.346395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.183 [2024-11-20 07:10:23.358081] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.183 [2024-11-20 07:10:23.358109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.183 [2024-11-20 07:10:23.369450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.183 [2024-11-20 07:10:23.369478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.183 [2024-11-20 07:10:23.379885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.183 [2024-11-20 07:10:23.379913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.183 [2024-11-20 07:10:23.390274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.183 [2024-11-20 07:10:23.390309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.183 [2024-11-20 07:10:23.400563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.183 [2024-11-20 07:10:23.400590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.183 [2024-11-20 07:10:23.411478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.183 [2024-11-20 07:10:23.411505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.183 [2024-11-20 07:10:23.423905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.183 [2024-11-20 07:10:23.423932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.183 [2024-11-20 07:10:23.434151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.183 [2024-11-20 07:10:23.434179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.183 [2024-11-20 07:10:23.444774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.183 [2024-11-20 07:10:23.444802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.183 [2024-11-20 07:10:23.455441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.183 [2024-11-20 07:10:23.455469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.183 [2024-11-20 07:10:23.466408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.183 [2024-11-20 07:10:23.466436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.183 [2024-11-20 07:10:23.479385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.183 [2024-11-20 07:10:23.479413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.183 [2024-11-20 07:10:23.489734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.183 [2024-11-20 07:10:23.489762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.183 [2024-11-20 07:10:23.500146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.183 [2024-11-20 07:10:23.500175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.183 [2024-11-20 07:10:23.510524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.183 [2024-11-20 07:10:23.510552] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.183 [2024-11-20 07:10:23.521396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.183 [2024-11-20 07:10:23.521424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.183 [2024-11-20 07:10:23.532174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.183 [2024-11-20 07:10:23.532202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.183 [2024-11-20 07:10:23.542943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.183 [2024-11-20 07:10:23.542971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.183 [2024-11-20 07:10:23.556490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.183 [2024-11-20 07:10:23.556518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.183 11773.20 IOPS, 91.98 MiB/s [2024-11-20T06:10:23.616Z] [2024-11-20 07:10:23.565587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.183 [2024-11-20 07:10:23.565615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.183 00:08:20.183 Latency(us) 00:08:20.183 [2024-11-20T06:10:23.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.183 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:20.183 Nvme1n1 : 5.01 11777.69 92.01 0.00 0.00 10854.90 4903.06 22816.24 00:08:20.183 [2024-11-20T06:10:23.616Z] =================================================================================================================== 00:08:20.183 [2024-11-20T06:10:23.616Z] Total : 11777.69 92.01 0.00 0.00 10854.90 4903.06 22816.24 00:08:20.183 [2024-11-20 07:10:23.572578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.183 [2024-11-20 07:10:23.572605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.183 [2024-11-20 07:10:23.580613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.183 [2024-11-20 07:10:23.580638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.183 [2024-11-20 07:10:23.588614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.183 [2024-11-20 07:10:23.588637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.183 [2024-11-20 07:10:23.596689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.183 [2024-11-20 07:10:23.596735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.183 [2024-11-20 07:10:23.604710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.183 [2024-11-20 07:10:23.604755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.183 [2024-11-20 07:10:23.612727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.183 [2024-11-20 07:10:23.612772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.442 [2024-11-20 07:10:23.620751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.442 [2024-11-20 
07:10:23.620796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.442 [2024-11-20 07:10:23.628767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.442 [2024-11-20 07:10:23.628812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.442 [2024-11-20 07:10:23.636798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.442 [2024-11-20 07:10:23.636846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.442 [2024-11-20 07:10:23.644812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.442 [2024-11-20 07:10:23.644858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.442 [2024-11-20 07:10:23.652835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.442 [2024-11-20 07:10:23.652881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.442 [2024-11-20 07:10:23.660855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.442 [2024-11-20 07:10:23.660902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.442 [2024-11-20 07:10:23.668874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.442 [2024-11-20 07:10:23.668921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.442 [2024-11-20 07:10:23.676903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.442 [2024-11-20 07:10:23.676950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.442 [2024-11-20 07:10:23.684923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.442 [2024-11-20 07:10:23.684969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.442 [2024-11-20 07:10:23.692942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.442 [2024-11-20 07:10:23.692989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.442 [2024-11-20 07:10:23.700967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.442 [2024-11-20 07:10:23.701012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.442 [2024-11-20 07:10:23.708981] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.442 [2024-11-20 07:10:23.709023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.442 [2024-11-20 07:10:23.716953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.442 [2024-11-20 07:10:23.716973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.442 [2024-11-20 07:10:23.724972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.442 [2024-11-20 07:10:23.724991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.442 [2024-11-20 07:10:23.732994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.442 [2024-11-20 07:10:23.733014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.442 [2024-11-20 07:10:23.741019] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.442 [2024-11-20 07:10:23.741039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.442 [2024-11-20 07:10:23.749097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.442 [2024-11-20 07:10:23.749141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.442 [2024-11-20 07:10:23.757120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.442 [2024-11-20 07:10:23.757166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.442 [2024-11-20 07:10:23.765101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.442 [2024-11-20 07:10:23.765127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.442 [2024-11-20 07:10:23.773108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.442 [2024-11-20 07:10:23.773129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.442 [2024-11-20 07:10:23.781127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.442 [2024-11-20 07:10:23.781147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2422431) - No such process 00:08:20.442 07:10:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2422431 00:08:20.442 07:10:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.442 07:10:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.442 07:10:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:20.442 07:10:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.442 07:10:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:20.442 07:10:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.442 07:10:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:20.442 delay0 00:08:20.442 07:10:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.442 07:10:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:20.442 07:10:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.442 07:10:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:20.442 07:10:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.442 07:10:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:20.700 [2024-11-20 07:10:23.944453] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service 
referral 00:08:27.262 Initializing NVMe Controllers 00:08:27.262 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:27.262 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:27.262 Initialization complete. Launching workers. 00:08:27.262 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 127 00:08:27.262 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 414, failed to submit 33 00:08:27.262 success 287, unsuccessful 127, failed 0 00:08:27.262 07:10:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:27.262 07:10:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:27.262 07:10:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:27.262 07:10:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:27.262 07:10:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:27.262 07:10:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:27.262 07:10:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:27.262 07:10:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:27.262 rmmod nvme_tcp 00:08:27.262 rmmod nvme_fabrics 00:08:27.262 rmmod nvme_keyring 00:08:27.262 07:10:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:27.262 07:10:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:08:27.262 07:10:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:27.262 07:10:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2421082 ']' 00:08:27.262 07:10:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2421082 00:08:27.262 07:10:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 2421082 ']' 00:08:27.262 07:10:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 2421082 00:08:27.262 07:10:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:08:27.262 07:10:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:27.262 07:10:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2421082 00:08:27.262 07:10:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:27.262 07:10:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:27.262 07:10:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2421082' 00:08:27.262 killing process with pid 2421082 00:08:27.262 07:10:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 2421082 00:08:27.262 07:10:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 2421082 00:08:27.262 07:10:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:27.262 07:10:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:27.262 07:10:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:27.262 
07:10:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:08:27.262 07:10:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:08:27.262 07:10:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:27.262 07:10:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:08:27.262 07:10:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:27.262 07:10:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:27.262 07:10:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.262 07:10:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:27.262 07:10:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.169 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:29.169 00:08:29.169 real 0m27.963s 00:08:29.169 user 0m41.455s 00:08:29.169 sys 0m8.135s 00:08:29.169 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:29.169 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:29.169 ************************************ 00:08:29.169 END TEST nvmf_zcopy 00:08:29.169 ************************************ 00:08:29.169 07:10:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:29.169 07:10:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:29.169 07:10:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:29.169 07:10:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:29.169 ************************************ 00:08:29.169 START TEST nvmf_nmic 00:08:29.169 ************************************ 00:08:29.169 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:29.169 * Looking for test storage... 
00:08:29.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.169 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:29.169 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:08:29.169 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:29.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.427 --rc genhtml_branch_coverage=1 00:08:29.427 --rc genhtml_function_coverage=1 00:08:29.427 --rc genhtml_legend=1 00:08:29.427 --rc geninfo_all_blocks=1 00:08:29.427 --rc geninfo_unexecuted_blocks=1 00:08:29.427 00:08:29.427 ' 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:29.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.427 --rc genhtml_branch_coverage=1 00:08:29.427 --rc genhtml_function_coverage=1 00:08:29.427 --rc genhtml_legend=1 00:08:29.427 --rc geninfo_all_blocks=1 00:08:29.427 --rc geninfo_unexecuted_blocks=1 00:08:29.427 00:08:29.427 ' 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:29.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.427 --rc genhtml_branch_coverage=1 00:08:29.427 --rc genhtml_function_coverage=1 00:08:29.427 --rc genhtml_legend=1 00:08:29.427 --rc geninfo_all_blocks=1 00:08:29.427 --rc geninfo_unexecuted_blocks=1 00:08:29.427 00:08:29.427 ' 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:29.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.427 --rc genhtml_branch_coverage=1 00:08:29.427 --rc genhtml_function_coverage=1 00:08:29.427 --rc genhtml_legend=1 00:08:29.427 --rc geninfo_all_blocks=1 00:08:29.427 --rc geninfo_unexecuted_blocks=1 00:08:29.427 00:08:29.427 ' 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.427 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.428 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.428 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.428 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.428 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:29.428 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.428 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:29.428 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:29.428 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:29.428 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:29.428 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.428 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.428 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:29.428 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:29.428 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:29.428 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:29.428 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:29.428 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:29.428 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:29.428 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:29.428 
07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:29.428 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.428 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:29.428 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:29.428 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:29.428 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.428 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.428 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.428 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:29.428 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:29.428 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:29.428 07:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:31.958 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:31.959 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:31.959 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:31.959 07:10:34 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:31.959 Found net devices under 0000:09:00.0: cvl_0_0 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:31.959 Found net devices under 0000:09:00.1: cvl_0_1 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:31.959 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:31.959 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:08:31.959 00:08:31.959 --- 10.0.0.2 ping statistics --- 00:08:31.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.959 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:31.959 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:31.959 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:08:31.959 00:08:31.959 --- 10.0.0.1 ping statistics --- 00:08:31.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.959 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:31.959 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.960 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:31.960 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:31.960 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.960 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:31.960 07:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:31.960 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:31.960 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:31.960 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:31.960 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:31.960 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2425824 00:08:31.960 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:31.960 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2425824 00:08:31.960 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 2425824 ']' 00:08:31.960 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.960 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:31.960 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.960 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:31.960 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:31.960 [2024-11-20 07:10:35.061661] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:08:31.960 [2024-11-20 07:10:35.061772] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.960 [2024-11-20 07:10:35.132521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:31.960 [2024-11-20 07:10:35.188732] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.960 [2024-11-20 07:10:35.188786] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.960 [2024-11-20 07:10:35.188809] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:31.960 [2024-11-20 07:10:35.188819] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:31.960 [2024-11-20 07:10:35.188829] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:31.960 [2024-11-20 07:10:35.190327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.960 [2024-11-20 07:10:35.190403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:31.960 [2024-11-20 07:10:35.190406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.960 [2024-11-20 07:10:35.190384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.960 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:31.960 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:08:31.960 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:31.960 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:31.960 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:31.960 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:31.960 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:31.960 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.960 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:31.960 [2024-11-20 07:10:35.333905] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:31.960 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.960 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:31.960 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.960 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:31.960 Malloc0 00:08:31.960 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.960 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:31.960 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.960 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:08:32.216 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.216 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:32.216 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.216 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:32.216 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.216 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:32.216 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.216 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:32.216 [2024-11-20 07:10:35.402905] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:32.216 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.216 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:32.216 test case1: single bdev can't be used in multiple subsystems 00:08:32.216 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:32.216 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.216 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:32.216 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.216 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:32.216 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.216 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:32.216 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.216 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:32.216 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:32.216 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.216 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:32.216 [2024-11-20 07:10:35.426757] bdev.c:8462:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:32.216 [2024-11-20 07:10:35.426787] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:32.216 [2024-11-20 07:10:35.426810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.216 request: 00:08:32.216 { 00:08:32.216 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:32.216 "namespace": { 00:08:32.216 "bdev_name": "Malloc0", 00:08:32.216 "no_auto_visible": false 
00:08:32.216 }, 00:08:32.216 "method": "nvmf_subsystem_add_ns", 00:08:32.216 "req_id": 1 00:08:32.216 } 00:08:32.216 Got JSON-RPC error response 00:08:32.216 response: 00:08:32.216 { 00:08:32.216 "code": -32602, 00:08:32.216 "message": "Invalid parameters" 00:08:32.216 } 00:08:32.216 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:32.216 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:32.216 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:32.216 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:32.216 Adding namespace failed - expected result. 00:08:32.216 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:32.216 test case2: host connect to nvmf target in multiple paths 00:08:32.216 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:32.216 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.216 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:32.216 [2024-11-20 07:10:35.434876] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:32.216 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.216 07:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:32.779 07:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:08:33.711 07:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:33.711 07:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:08:33.711 07:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:08:33.711 07:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:08:33.711 07:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:08:35.612 07:10:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:08:35.612 07:10:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:08:35.612 07:10:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:08:35.612 07:10:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:08:35.612 07:10:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:08:35.612 07:10:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:08:35.612 07:10:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:35.612 [global] 00:08:35.612 thread=1 00:08:35.612 invalidate=1 00:08:35.612 rw=write 00:08:35.612 time_based=1 00:08:35.612 runtime=1 00:08:35.612 ioengine=libaio 00:08:35.612 direct=1 00:08:35.612 bs=4096 00:08:35.612 iodepth=1 00:08:35.612 norandommap=0 00:08:35.612 numjobs=1 00:08:35.612 00:08:35.612 verify_dump=1 00:08:35.612 verify_backlog=512 00:08:35.612 verify_state_save=0 00:08:35.612 do_verify=1 00:08:35.612 verify=crc32c-intel 00:08:35.612 [job0] 00:08:35.612 filename=/dev/nvme0n1 00:08:35.612 Could not set queue depth (nvme0n1) 00:08:35.870 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:35.870 fio-3.35 00:08:35.870 Starting 1 thread 00:08:36.811 00:08:36.811 job0: (groupid=0, jobs=1): err= 0: pid=2426344: Wed Nov 20 07:10:40 2024 00:08:36.811 read: IOPS=22, BW=89.2KiB/s (91.4kB/s)(92.0KiB/1031msec) 00:08:36.811 slat (nsec): min=8373, max=38668, avg=24888.30, stdev=9755.84 00:08:36.811 clat (usec): min=40910, max=42971, avg=41241.04, stdev=542.11 00:08:36.811 lat (usec): min=40927, max=43009, avg=41265.93, stdev=543.86 00:08:36.811 clat percentiles (usec): 00:08:36.811 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:08:36.811 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:08:36.811 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:08:36.811 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:08:36.811 | 99.99th=[42730] 00:08:36.811 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:08:36.811 slat (nsec): min=8540, max=39927, avg=9437.56, stdev=1674.90 00:08:36.811 clat (usec): min=131, max=345, avg=147.49, stdev=12.12 00:08:36.811 lat (usec): min=140, max=385, avg=156.93, stdev=13.16 00:08:36.811 clat percentiles (usec): 00:08:36.811 | 1.00th=[ 135], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 141], 00:08:36.811 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 149], 00:08:36.811 | 70.00th=[ 151], 80.00th=[ 155], 90.00th=[ 159], 95.00th=[ 161], 00:08:36.811 | 99.00th=[ 176], 99.50th=[ 182], 99.90th=[ 347], 99.95th=[ 347], 00:08:36.811 | 99.99th=[ 347] 00:08:36.811 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:08:36.811 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:36.811 lat (usec) : 250=95.51%, 500=0.19% 00:08:36.811 lat (msec) : 50=4.30% 00:08:36.811 cpu : usr=0.58%, sys=0.39%, ctx=535, majf=0, minf=1 00:08:36.811 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:36.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:36.811 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:36.811 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:36.811 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:36.811 00:08:36.811 Run status group 0 (all jobs): 00:08:36.811 READ: bw=89.2KiB/s (91.4kB/s), 89.2KiB/s-89.2KiB/s (91.4kB/s-91.4kB/s), io=92.0KiB (94.2kB), run=1031-1031msec 00:08:36.811 WRITE: bw=1986KiB/s (2034kB/s), 1986KiB/s-1986KiB/s (2034kB/s-2034kB/s), io=2048KiB (2097kB), run=1031-1031msec 00:08:36.811 00:08:36.811 Disk stats (read/write): 00:08:36.811 nvme0n1: ios=69/512, merge=0/0, ticks=926/71, in_queue=997, util=95.49% 00:08:36.811 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:37.069 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:37.069 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:37.069 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:08:37.069 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:08:37.069 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:37.069 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:08:37.069 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:37.069 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:08:37.069 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:37.069 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:37.069 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:37.069 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:37.069 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:37.069 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:37.069 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:37.069 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:37.069 rmmod nvme_tcp 00:08:37.069 rmmod nvme_fabrics 00:08:37.069 rmmod nvme_keyring 00:08:37.069 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:37.069 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:37.069 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:37.069 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2425824 ']' 00:08:37.069 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2425824 00:08:37.069 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 2425824 ']' 00:08:37.069 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 2425824 00:08:37.069 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:08:37.069 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:37.069 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2425824 00:08:37.069 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:37.069 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:37.069 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2425824' 00:08:37.069 killing process with pid 2425824 00:08:37.069 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 2425824 00:08:37.069 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@976 -- # wait 2425824 00:08:37.328 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:37.328 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:37.328 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:37.328 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:08:37.328 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:08:37.328 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:37.328 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:08:37.328 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:37.328 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:37.328 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.328 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:37.328 07:10:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:39.865 00:08:39.865 real 0m10.226s 00:08:39.865 user 0m22.966s 00:08:39.865 sys 0m2.407s 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:39.865 ************************************ 00:08:39.865 END TEST nvmf_nmic 00:08:39.865 ************************************ 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:39.865 ************************************ 00:08:39.865 START TEST nvmf_fio_target 00:08:39.865 ************************************ 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:39.865 * Looking for test storage... 
00:08:39.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:39.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.865 --rc genhtml_branch_coverage=1 00:08:39.865 --rc genhtml_function_coverage=1 00:08:39.865 --rc genhtml_legend=1 00:08:39.865 --rc geninfo_all_blocks=1 00:08:39.865 --rc geninfo_unexecuted_blocks=1 00:08:39.865 00:08:39.865 ' 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:39.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.865 --rc genhtml_branch_coverage=1 00:08:39.865 --rc genhtml_function_coverage=1 00:08:39.865 --rc genhtml_legend=1 00:08:39.865 --rc geninfo_all_blocks=1 00:08:39.865 --rc geninfo_unexecuted_blocks=1 00:08:39.865 00:08:39.865 ' 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:39.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.865 --rc genhtml_branch_coverage=1 00:08:39.865 --rc genhtml_function_coverage=1 00:08:39.865 --rc genhtml_legend=1 00:08:39.865 --rc geninfo_all_blocks=1 00:08:39.865 --rc geninfo_unexecuted_blocks=1 00:08:39.865 00:08:39.865 ' 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:39.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.865 --rc genhtml_branch_coverage=1 00:08:39.865 --rc genhtml_function_coverage=1 00:08:39.865 --rc genhtml_legend=1 00:08:39.865 --rc geninfo_all_blocks=1 00:08:39.865 --rc geninfo_unexecuted_blocks=1 00:08:39.865 00:08:39.865 ' 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:08:39.865 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:39.866 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:39.866 07:10:42 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:08:39.866 07:10:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:41.773 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:41.773 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:08:41.773 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:41.773 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:41.773 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:41.773 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:41.773 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:41.773 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:08:41.773 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:41.773 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:08:41.773 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:08:41.773 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:08:41.773 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:08:41.773 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:08:41.773 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:41.774 07:10:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:41.774 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:41.774 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:41.774 07:10:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:41.774 Found net devices under 0000:09:00.0: cvl_0_0 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:41.774 Found net devices under 0000:09:00.1: cvl_0_1 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:41.774 07:10:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:41.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:41.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:08:41.774 00:08:41.774 --- 10.0.0.2 ping statistics --- 00:08:41.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.774 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:41.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:41.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:08:41.774 00:08:41.774 --- 10.0.0.1 ping statistics --- 00:08:41.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.774 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:08:41.774 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:41.775 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:08:41.775 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:41.775 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:41.775 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:41.775 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:41.775 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:41.775 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:41.775 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:42.033 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:08:42.033 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:42.033 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:42.033 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:42.033 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2428439 00:08:42.033 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:42.033 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2428439 00:08:42.033 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 2428439 ']' 00:08:42.033 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.033 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:42.033 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.033 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:42.033 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:42.033 [2024-11-20 07:10:45.275012] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:08:42.033 [2024-11-20 07:10:45.275119] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:42.033 [2024-11-20 07:10:45.348672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:42.033 [2024-11-20 07:10:45.403979] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:42.033 [2024-11-20 07:10:45.404045] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:42.033 [2024-11-20 07:10:45.404074] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:42.033 [2024-11-20 07:10:45.404085] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:42.033 [2024-11-20 07:10:45.404094] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:42.033 [2024-11-20 07:10:45.405782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.033 [2024-11-20 07:10:45.405867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:42.033 [2024-11-20 07:10:45.405925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:42.033 [2024-11-20 07:10:45.405930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.290 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:42.290 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:08:42.290 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:42.290 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:42.290 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:42.290 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:42.290 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:42.546 [2024-11-20 07:10:45.794728] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:42.546 07:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:42.804 07:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:08:42.804 07:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:43.062 07:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:08:43.062 07:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:43.319 07:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:08:43.319 07:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:43.576 07:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:08:43.576 07:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:08:43.833 07:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:44.091 07:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:08:44.091 07:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:44.657 07:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:08:44.657 07:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:44.914 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:08:44.914 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:08:45.171 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:45.428 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:45.428 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:45.685 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:45.685 07:10:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:45.942 07:10:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:46.198 [2024-11-20 07:10:49.405653] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:46.198 07:10:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:08:46.456 07:10:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:08:46.714 07:10:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:47.280 07:10:50 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:08:47.280 07:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:08:47.280 07:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:08:47.280 07:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:08:47.280 07:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:08:47.280 07:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:08:49.808 07:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:08:49.808 07:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:08:49.808 07:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:08:49.808 07:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:08:49.808 07:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:08:49.808 07:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:08:49.808 07:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:49.808 [global] 00:08:49.808 thread=1 00:08:49.808 invalidate=1 00:08:49.808 rw=write 00:08:49.808 time_based=1 00:08:49.808 runtime=1 00:08:49.808 ioengine=libaio 00:08:49.808 direct=1 00:08:49.808 bs=4096 00:08:49.808 iodepth=1 00:08:49.808 norandommap=0 00:08:49.808 numjobs=1 00:08:49.809 00:08:49.809 verify_dump=1 00:08:49.809 verify_backlog=512 00:08:49.809 verify_state_save=0 00:08:49.809 do_verify=1 00:08:49.809 verify=crc32c-intel 00:08:49.809 [job0] 00:08:49.809 filename=/dev/nvme0n1 00:08:49.809 [job1] 00:08:49.809 filename=/dev/nvme0n2 00:08:49.809 [job2] 00:08:49.809 filename=/dev/nvme0n3 00:08:49.809 [job3] 00:08:49.809 filename=/dev/nvme0n4 00:08:49.809 Could not set queue depth (nvme0n1) 00:08:49.809 Could not set queue depth (nvme0n2) 00:08:49.809 Could not set queue depth (nvme0n3) 00:08:49.809 Could not set queue depth (nvme0n4) 00:08:49.809 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:49.809 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:49.809 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:49.809 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:49.809 fio-3.35 00:08:49.809 Starting 4 threads 00:08:50.741 00:08:50.741 job0: (groupid=0, jobs=1): err= 0: pid=2429514: Wed Nov 20 07:10:54 2024 00:08:50.741 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:08:50.741 slat (nsec): min=4958, max=70761, avg=14873.91, stdev=9315.19 00:08:50.741 clat (usec): min=172, max=614, avg=253.78, stdev=86.77 00:08:50.741 lat (usec): min=178, max=647, avg=268.65, stdev=93.35 00:08:50.741 clat percentiles (usec): 00:08:50.741 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 192], 
00:08:50.741 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 212], 00:08:50.741 | 70.00th=[ 297], 80.00th=[ 347], 90.00th=[ 388], 95.00th=[ 416], 00:08:50.741 | 99.00th=[ 519], 99.50th=[ 578], 99.90th=[ 586], 99.95th=[ 594], 00:08:50.741 | 99.99th=[ 611] 00:08:50.741 write: IOPS=2061, BW=8248KiB/s (8446kB/s)(8256KiB/1001msec); 0 zone resets 00:08:50.741 slat (nsec): min=5817, max=61591, avg=14200.28, stdev=5398.11 00:08:50.741 clat (usec): min=127, max=560, avg=195.01, stdev=59.58 00:08:50.741 lat (usec): min=136, max=568, avg=209.21, stdev=59.09 00:08:50.741 clat percentiles (usec): 00:08:50.741 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 149], 00:08:50.741 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 165], 60.00th=[ 206], 00:08:50.741 | 70.00th=[ 223], 80.00th=[ 239], 90.00th=[ 265], 95.00th=[ 314], 00:08:50.741 | 99.00th=[ 400], 99.50th=[ 420], 99.90th=[ 523], 99.95th=[ 537], 00:08:50.741 | 99.99th=[ 562] 00:08:50.741 bw ( KiB/s): min= 8192, max= 8192, per=37.02%, avg=8192.00, stdev= 0.00, samples=1 00:08:50.741 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:50.741 lat (usec) : 250=76.97%, 500=22.40%, 750=0.63% 00:08:50.741 cpu : usr=2.80%, sys=6.90%, ctx=4112, majf=0, minf=1 00:08:50.741 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:50.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.741 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.741 issued rwts: total=2048,2064,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:50.741 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:50.741 job1: (groupid=0, jobs=1): err= 0: pid=2429515: Wed Nov 20 07:10:54 2024 00:08:50.741 read: IOPS=232, BW=930KiB/s (952kB/s)(932KiB/1002msec) 00:08:50.741 slat (nsec): min=6652, max=34221, avg=10847.44, stdev=6931.36 00:08:50.741 clat (usec): min=188, max=41293, avg=3887.53, stdev=11488.73 00:08:50.741 lat (usec): min=195, max=41313, avg=3898.38, stdev=11493.21 00:08:50.741 clat percentiles (usec): 00:08:50.741 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 206], 00:08:50.741 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 229], 60.00th=[ 243], 00:08:50.741 | 70.00th=[ 249], 80.00th=[ 363], 90.00th=[ 469], 95.00th=[41157], 00:08:50.741 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:50.741 | 99.99th=[41157] 00:08:50.741 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:08:50.741 slat (nsec): min=7418, max=40179, avg=9983.33, stdev=3616.98 00:08:50.741 clat (usec): min=138, max=246, avg=167.62, stdev=19.43 00:08:50.741 lat (usec): min=147, max=255, avg=177.60, stdev=19.74 00:08:50.741 clat percentiles (usec): 00:08:50.741 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 155], 00:08:50.741 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 165], 00:08:50.741 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 194], 95.00th=[ 215], 00:08:50.741 | 99.00th=[ 237], 99.50th=[ 239], 99.90th=[ 247], 99.95th=[ 247], 00:08:50.741 | 99.99th=[ 247] 00:08:50.741 bw ( KiB/s): min= 4096, max= 4096, per=18.51%, avg=4096.00, stdev= 0.00, samples=1 00:08:50.741 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:50.741 lat (usec) : 250=90.87%, 500=6.17% 00:08:50.741 lat (msec) : 20=0.13%, 50=2.82% 00:08:50.741 cpu : usr=0.40%, sys=1.10%, ctx=745, majf=0, minf=1 00:08:50.741 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:50.741 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.741 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.741 issued rwts: total=233,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:50.741 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:50.741 job2: (groupid=0, jobs=1): err= 0: pid=2429516: Wed Nov 20 07:10:54 2024 00:08:50.741 read: IOPS=1806, BW=7225KiB/s (7398kB/s)(7232KiB/1001msec) 00:08:50.741 slat (nsec): min=7162, max=58498, avg=16058.57, stdev=5588.58 00:08:50.741 clat (usec): min=202, max=571, avg=285.71, stdev=69.23 00:08:50.741 lat (usec): min=210, max=590, avg=301.77, stdev=71.66 00:08:50.741 clat percentiles (usec): 00:08:50.741 | 1.00th=[ 223], 5.00th=[ 235], 10.00th=[ 241], 20.00th=[ 249], 00:08:50.741 | 30.00th=[ 253], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:08:50.741 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 408], 95.00th=[ 469], 00:08:50.741 | 99.00th=[ 537], 99.50th=[ 545], 99.90th=[ 562], 99.95th=[ 570], 00:08:50.741 | 99.99th=[ 570] 00:08:50.741 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:08:50.741 slat (nsec): min=7752, max=62981, avg=17397.71, stdev=6395.72 00:08:50.741 clat (usec): min=141, max=1215, avg=195.48, stdev=31.57 00:08:50.741 lat (usec): min=150, max=1225, avg=212.88, stdev=32.89 00:08:50.741 clat percentiles (usec): 00:08:50.741 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 180], 00:08:50.741 | 30.00th=[ 186], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 200], 00:08:50.741 | 70.00th=[ 204], 80.00th=[ 210], 90.00th=[ 223], 95.00th=[ 233], 00:08:50.741 | 99.00th=[ 260], 99.50th=[ 285], 99.90th=[ 297], 99.95th=[ 359], 00:08:50.741 | 99.99th=[ 1221] 00:08:50.741 bw ( KiB/s): min= 8192, max= 8192, per=37.02%, avg=8192.00, stdev= 0.00, samples=1 00:08:50.741 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:50.741 lat (usec) : 250=62.99%, 500=35.50%, 750=1.48% 00:08:50.741 lat (msec) : 2=0.03% 00:08:50.741 cpu : usr=5.30%, sys=8.30%, ctx=3856, majf=0, minf=1 00:08:50.741 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:50.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.742 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.742 issued rwts: total=1808,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:50.742 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:50.742 job3: (groupid=0, jobs=1): err= 0: pid=2429517: Wed Nov 20 07:10:54 2024 00:08:50.742 read: IOPS=738, BW=2954KiB/s (3025kB/s)(3016KiB/1021msec) 00:08:50.742 slat (nsec): min=5237, max=71876, avg=22936.52, stdev=11375.81 00:08:50.742 clat (usec): min=207, max=41107, avg=991.62, stdev=5088.23 00:08:50.742 lat (usec): min=220, max=41168, avg=1014.55, stdev=5089.36 00:08:50.742 clat percentiles (usec): 00:08:50.742 | 1.00th=[ 217], 5.00th=[ 239], 10.00th=[ 289], 20.00th=[ 306], 00:08:50.742 | 30.00th=[ 322], 40.00th=[ 338], 50.00th=[ 355], 60.00th=[ 363], 00:08:50.742 | 70.00th=[ 375], 80.00th=[ 392], 90.00th=[ 404], 95.00th=[ 420], 00:08:50.742 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:50.742 | 99.99th=[41157] 00:08:50.742 write: IOPS=1002, BW=4012KiB/s (4108kB/s)(4096KiB/1021msec); 0 zone resets 00:08:50.742 slat (nsec): min=6008, max=60581, avg=13149.63, stdev=5872.49 00:08:50.742 clat (usec): min=161, max=382, avg=228.08, stdev=33.94 00:08:50.742 lat (usec): min=167, max=399, avg=241.23, stdev=33.57 00:08:50.742 clat percentiles (usec): 
00:08:50.742 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 182], 20.00th=[ 206], 00:08:50.742 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 233], 00:08:50.742 | 70.00th=[ 241], 80.00th=[ 249], 90.00th=[ 265], 95.00th=[ 277], 00:08:50.742 | 99.00th=[ 338], 99.50th=[ 359], 99.90th=[ 379], 99.95th=[ 383], 00:08:50.742 | 99.99th=[ 383] 00:08:50.742 bw ( KiB/s): min= 8192, max= 8192, per=37.02%, avg=8192.00, stdev= 0.00, samples=1 00:08:50.742 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:50.742 lat (usec) : 250=49.04%, 500=50.28% 00:08:50.742 lat (msec) : 50=0.67% 00:08:50.742 cpu : usr=1.27%, sys=3.43%, ctx=1778, majf=0, minf=1 00:08:50.742 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:50.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.742 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.742 issued rwts: total=754,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:50.742 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:50.742 00:08:50.742 Run status group 0 (all jobs): 00:08:50.742 READ: bw=18.5MiB/s (19.4MB/s), 930KiB/s-8184KiB/s (952kB/s-8380kB/s), io=18.9MiB (19.8MB), run=1001-1021msec 00:08:50.742 WRITE: bw=21.6MiB/s (22.7MB/s), 2044KiB/s-8248KiB/s (2093kB/s-8446kB/s), io=22.1MiB (23.1MB), run=1001-1021msec 00:08:50.742 00:08:50.742 Disk stats (read/write): 00:08:50.742 nvme0n1: ios=1586/1817, merge=0/0, ticks=423/350, in_queue=773, util=86.27% 00:08:50.742 nvme0n2: ios=278/512, merge=0/0, ticks=788/80, in_queue=868, util=90.33% 00:08:50.742 nvme0n3: ios=1593/1749, merge=0/0, ticks=493/317, in_queue=810, util=94.55% 00:08:50.742 nvme0n4: ios=806/1024, merge=0/0, ticks=606/232, in_queue=838, util=95.46% 00:08:50.742 07:10:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:08:50.999 [global] 00:08:51.000 thread=1 00:08:51.000 invalidate=1 00:08:51.000 rw=randwrite 00:08:51.000 time_based=1 00:08:51.000 runtime=1 00:08:51.000 ioengine=libaio 00:08:51.000 direct=1 00:08:51.000 bs=4096 00:08:51.000 iodepth=1 00:08:51.000 norandommap=0 00:08:51.000 numjobs=1 00:08:51.000 00:08:51.000 verify_dump=1 00:08:51.000 verify_backlog=512 00:08:51.000 verify_state_save=0 00:08:51.000 do_verify=1 00:08:51.000 verify=crc32c-intel 00:08:51.000 [job0] 00:08:51.000 filename=/dev/nvme0n1 00:08:51.000 [job1] 00:08:51.000 filename=/dev/nvme0n2 00:08:51.000 [job2] 00:08:51.000 filename=/dev/nvme0n3 00:08:51.000 [job3] 00:08:51.000 filename=/dev/nvme0n4 00:08:51.000 Could not set queue depth (nvme0n1) 00:08:51.000 Could not set queue depth (nvme0n2) 00:08:51.000 Could not set queue depth (nvme0n3) 00:08:51.000 Could not set queue depth (nvme0n4) 00:08:51.000 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:51.000 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:51.000 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:51.000 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:51.000 fio-3.35 00:08:51.000 Starting 4 threads 00:08:52.380 00:08:52.380 job0: (groupid=0, jobs=1): err= 0: pid=2429859: Wed Nov 20 07:10:55 2024 00:08:52.380 read: IOPS=1523, BW=6093KiB/s 
(6239kB/s)(6148KiB/1009msec) 00:08:52.380 slat (nsec): min=6448, max=70105, avg=15570.36, stdev=6578.75 00:08:52.380 clat (usec): min=181, max=41359, avg=346.76, stdev=1051.03 00:08:52.380 lat (usec): min=190, max=41367, avg=362.33, stdev=1051.10 00:08:52.380 clat percentiles (usec): 00:08:52.380 | 1.00th=[ 202], 5.00th=[ 215], 10.00th=[ 227], 20.00th=[ 241], 00:08:52.380 | 30.00th=[ 251], 40.00th=[ 265], 50.00th=[ 289], 60.00th=[ 314], 00:08:52.380 | 70.00th=[ 351], 80.00th=[ 429], 90.00th=[ 449], 95.00th=[ 506], 00:08:52.380 | 99.00th=[ 553], 99.50th=[ 562], 99.90th=[ 791], 99.95th=[41157], 00:08:52.380 | 99.99th=[41157] 00:08:52.380 write: IOPS=2029, BW=8119KiB/s (8314kB/s)(8192KiB/1009msec); 0 zone resets 00:08:52.380 slat (nsec): min=7175, max=81225, avg=16120.73, stdev=6628.55 00:08:52.380 clat (usec): min=134, max=1094, avg=195.62, stdev=41.26 00:08:52.380 lat (usec): min=143, max=1116, avg=211.74, stdev=42.91 00:08:52.380 clat percentiles (usec): 00:08:52.380 | 1.00th=[ 149], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 172], 00:08:52.380 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 194], 00:08:52.380 | 70.00th=[ 200], 80.00th=[ 210], 90.00th=[ 241], 95.00th=[ 273], 00:08:52.380 | 99.00th=[ 302], 99.50th=[ 310], 99.90th=[ 668], 99.95th=[ 685], 00:08:52.381 | 99.99th=[ 1090] 00:08:52.381 bw ( KiB/s): min= 8192, max= 8192, per=30.47%, avg=8192.00, stdev= 0.00, samples=2 00:08:52.381 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:08:52.381 lat (usec) : 250=64.97%, 500=32.64%, 750=2.32%, 1000=0.03% 00:08:52.381 lat (msec) : 2=0.03%, 50=0.03% 00:08:52.381 cpu : usr=4.27%, sys=7.84%, ctx=3588, majf=0, minf=2 00:08:52.381 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:52.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:52.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:52.381 issued rwts: total=1537,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:52.381 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:52.381 job1: (groupid=0, jobs=1): err= 0: pid=2429860: Wed Nov 20 07:10:55 2024 00:08:52.381 read: IOPS=21, BW=85.9KiB/s (87.9kB/s)(88.0KiB/1025msec) 00:08:52.381 slat (nsec): min=9227, max=20007, avg=12375.55, stdev=2380.49 00:08:52.381 clat (usec): min=40632, max=41095, avg=40969.31, stdev=93.17 00:08:52.381 lat (usec): min=40642, max=41106, avg=40981.69, stdev=93.28 00:08:52.381 clat percentiles (usec): 00:08:52.381 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:08:52.381 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:08:52.381 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:08:52.381 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:52.381 | 99.99th=[41157] 00:08:52.381 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:08:52.381 slat (nsec): min=9288, max=62610, avg=11082.96, stdev=3301.69 00:08:52.381 clat (usec): min=136, max=342, avg=224.87, stdev=26.11 00:08:52.381 lat (usec): min=146, max=405, avg=235.95, stdev=26.60 00:08:52.381 clat percentiles (usec): 00:08:52.381 | 1.00th=[ 143], 5.00th=[ 161], 10.00th=[ 206], 20.00th=[ 215], 00:08:52.381 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 227], 60.00th=[ 229], 00:08:52.381 | 70.00th=[ 235], 80.00th=[ 241], 90.00th=[ 253], 95.00th=[ 265], 00:08:52.381 | 99.00th=[ 281], 99.50th=[ 289], 99.90th=[ 343], 99.95th=[ 343], 00:08:52.381 | 99.99th=[ 343] 00:08:52.381 bw ( 
KiB/s): min= 4096, max= 4096, per=15.24%, avg=4096.00, stdev= 0.00, samples=1 00:08:52.381 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:52.381 lat (usec) : 250=84.08%, 500=11.80% 00:08:52.381 lat (msec) : 50=4.12% 00:08:52.381 cpu : usr=0.49%, sys=0.59%, ctx=535, majf=0, minf=1 00:08:52.381 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:52.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:52.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:52.381 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:52.381 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:52.381 job2: (groupid=0, jobs=1): err= 0: pid=2429862: Wed Nov 20 07:10:55 2024 00:08:52.381 read: IOPS=1655, BW=6621KiB/s (6780kB/s)(6628KiB/1001msec) 00:08:52.381 slat (nsec): min=5128, max=81155, avg=18734.66, stdev=11912.14 00:08:52.381 clat (usec): min=182, max=41828, avg=321.00, stdev=1022.60 00:08:52.381 lat (usec): min=187, max=41843, avg=339.74, stdev=1023.01 00:08:52.381 clat percentiles (usec): 00:08:52.381 | 1.00th=[ 196], 5.00th=[ 208], 10.00th=[ 217], 20.00th=[ 227], 00:08:52.381 | 30.00th=[ 237], 40.00th=[ 269], 50.00th=[ 293], 60.00th=[ 310], 00:08:52.381 | 70.00th=[ 347], 80.00th=[ 359], 90.00th=[ 375], 95.00th=[ 416], 00:08:52.381 | 99.00th=[ 486], 99.50th=[ 490], 99.90th=[ 515], 99.95th=[41681], 00:08:52.381 | 99.99th=[41681] 00:08:52.381 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:08:52.381 slat (nsec): min=5724, max=62072, avg=12467.46, stdev=5537.78 00:08:52.381 clat (usec): min=139, max=1081, avg=193.09, stdev=38.26 00:08:52.381 lat (usec): min=146, max=1089, avg=205.56, stdev=40.22 00:08:52.381 clat percentiles (usec): 00:08:52.381 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 163], 20.00th=[ 172], 00:08:52.381 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 192], 00:08:52.381 | 70.00th=[ 198], 80.00th=[ 208], 90.00th=[ 233], 95.00th=[ 251], 00:08:52.381 | 99.00th=[ 330], 99.50th=[ 375], 99.90th=[ 416], 99.95th=[ 420], 00:08:52.381 | 99.99th=[ 1074] 00:08:52.381 bw ( KiB/s): min= 8192, max= 8192, per=30.47%, avg=8192.00, stdev= 0.00, samples=1 00:08:52.381 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:52.381 lat (usec) : 250=69.23%, 500=30.61%, 750=0.11% 00:08:52.381 lat (msec) : 2=0.03%, 50=0.03% 00:08:52.381 cpu : usr=2.60%, sys=6.40%, ctx=3706, majf=0, minf=2 00:08:52.381 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:52.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:52.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:52.381 issued rwts: total=1657,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:52.381 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:52.381 job3: (groupid=0, jobs=1): err= 0: pid=2429863: Wed Nov 20 07:10:55 2024 00:08:52.381 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:08:52.381 slat (nsec): min=5241, max=46136, avg=11693.67, stdev=4338.34 00:08:52.381 clat (usec): min=187, max=1011, avg=247.59, stdev=60.43 00:08:52.381 lat (usec): min=194, max=1020, avg=259.28, stdev=62.12 00:08:52.381 clat percentiles (usec): 00:08:52.381 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 210], 00:08:52.381 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 233], 00:08:52.381 | 70.00th=[ 243], 80.00th=[ 273], 90.00th=[ 330], 95.00th=[ 396], 
00:08:52.381 | 99.00th=[ 445], 99.50th=[ 461], 99.90th=[ 519], 99.95th=[ 519], 00:08:52.381 | 99.99th=[ 1012] 00:08:52.381 write: IOPS=2278, BW=9115KiB/s (9334kB/s)(9124KiB/1001msec); 0 zone resets 00:08:52.381 slat (nsec): min=5907, max=41468, avg=15155.06, stdev=5520.53 00:08:52.381 clat (usec): min=142, max=279, avg=183.34, stdev=22.73 00:08:52.381 lat (usec): min=150, max=287, avg=198.50, stdev=24.83 00:08:52.381 clat percentiles (usec): 00:08:52.381 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:08:52.381 | 30.00th=[ 167], 40.00th=[ 174], 50.00th=[ 180], 60.00th=[ 186], 00:08:52.381 | 70.00th=[ 194], 80.00th=[ 204], 90.00th=[ 217], 95.00th=[ 227], 00:08:52.381 | 99.00th=[ 243], 99.50th=[ 249], 99.90th=[ 269], 99.95th=[ 277], 00:08:52.381 | 99.99th=[ 281] 00:08:52.381 bw ( KiB/s): min=10480, max=10480, per=38.98%, avg=10480.00, stdev= 0.00, samples=1 00:08:52.381 iops : min= 2620, max= 2620, avg=2620.00, stdev= 0.00, samples=1 00:08:52.381 lat (usec) : 250=87.34%, 500=12.57%, 750=0.07% 00:08:52.381 lat (msec) : 2=0.02% 00:08:52.381 cpu : usr=2.80%, sys=7.50%, ctx=4332, majf=0, minf=1 00:08:52.381 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:52.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:52.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:52.381 issued rwts: total=2048,2281,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:52.381 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:52.381 00:08:52.381 Run status group 0 (all jobs): 00:08:52.381 READ: bw=20.1MiB/s (21.0MB/s), 85.9KiB/s-8184KiB/s (87.9kB/s-8380kB/s), io=20.6MiB (21.6MB), run=1001-1025msec 00:08:52.381 WRITE: bw=26.3MiB/s (27.5MB/s), 1998KiB/s-9115KiB/s (2046kB/s-9334kB/s), io=26.9MiB (28.2MB), run=1001-1025msec 00:08:52.381 00:08:52.381 Disk stats (read/write): 00:08:52.381 nvme0n1: ios=1530/1536, merge=0/0, ticks=470/300, in_queue=770, util=86.77% 00:08:52.381 nvme0n2: ios=30/512, merge=0/0, ticks=705/115, in_queue=820, util=86.79% 00:08:52.381 nvme0n3: ios=1464/1536, merge=0/0, ticks=463/303, in_queue=766, util=88.95% 00:08:52.381 nvme0n4: ios=1722/2048, merge=0/0, ticks=1326/361, in_queue=1687, util=98.21% 00:08:52.381 07:10:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:08:52.381 [global] 00:08:52.381 thread=1 00:08:52.381 invalidate=1 00:08:52.381 rw=write 00:08:52.381 time_based=1 00:08:52.381 runtime=1 00:08:52.381 ioengine=libaio 00:08:52.381 direct=1 00:08:52.381 bs=4096 00:08:52.381 iodepth=128 00:08:52.381 norandommap=0 00:08:52.381 numjobs=1 00:08:52.381 00:08:52.381 verify_dump=1 00:08:52.381 verify_backlog=512 00:08:52.381 verify_state_save=0 00:08:52.381 do_verify=1 00:08:52.381 verify=crc32c-intel 00:08:52.381 [job0] 00:08:52.381 filename=/dev/nvme0n1 00:08:52.381 [job1] 00:08:52.381 filename=/dev/nvme0n2 00:08:52.381 [job2] 00:08:52.381 filename=/dev/nvme0n3 00:08:52.381 [job3] 00:08:52.381 filename=/dev/nvme0n4 00:08:52.381 Could not set queue depth (nvme0n1) 00:08:52.381 Could not set queue depth (nvme0n2) 00:08:52.381 Could not set queue depth (nvme0n3) 00:08:52.381 Could not set queue depth (nvme0n4) 00:08:52.640 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:52.640 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:08:52.640 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:52.640 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:52.640 fio-3.35 00:08:52.640 Starting 4 threads 00:08:54.015 00:08:54.015 job0: (groupid=0, jobs=1): err= 0: pid=2430093: Wed Nov 20 07:10:57 2024 00:08:54.015 read: IOPS=5014, BW=19.6MiB/s (20.5MB/s)(19.6MiB/1003msec) 00:08:54.015 slat (usec): min=3, max=10348, avg=96.96, stdev=557.02 00:08:54.015 clat (usec): min=2255, max=23133, avg=12272.53, stdev=2363.87 00:08:54.015 lat (usec): min=4508, max=23148, avg=12369.49, stdev=2402.70 00:08:54.015 clat percentiles (usec): 00:08:54.015 | 1.00th=[ 5080], 5.00th=[ 8979], 10.00th=[ 9896], 20.00th=[10683], 00:08:54.015 | 30.00th=[11469], 40.00th=[11863], 50.00th=[11994], 60.00th=[12256], 00:08:54.015 | 70.00th=[12518], 80.00th=[13829], 90.00th=[15139], 95.00th=[16450], 00:08:54.015 | 99.00th=[20055], 99.50th=[21365], 99.90th=[22938], 99.95th=[23200], 00:08:54.015 | 99.99th=[23200] 00:08:54.015 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:08:54.015 slat (usec): min=4, max=9694, avg=90.38, stdev=432.18 00:08:54.015 clat (usec): min=4329, max=27018, avg=12774.21, stdev=3367.50 00:08:54.015 lat (usec): min=4345, max=27028, avg=12864.60, stdev=3404.91 00:08:54.015 clat percentiles (usec): 00:08:54.015 | 1.00th=[ 5866], 5.00th=[ 9634], 10.00th=[10421], 20.00th=[10945], 00:08:54.015 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12125], 60.00th=[12256], 00:08:54.015 | 70.00th=[12518], 80.00th=[13304], 90.00th=[16188], 95.00th=[21627], 00:08:54.015 | 99.00th=[25560], 99.50th=[26608], 99.90th=[27132], 99.95th=[27132], 00:08:54.015 | 99.99th=[27132] 00:08:54.015 bw ( KiB/s): min=20480, max=20521, per=28.17%, avg=20500.50, stdev=28.99, samples=2 00:08:54.015 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:08:54.015 lat (msec) : 4=0.01%, 10=9.87%, 20=86.45%, 50=3.67% 00:08:54.015 cpu : usr=6.29%, sys=9.88%, ctx=596, majf=0, minf=1 00:08:54.015 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:08:54.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:54.015 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:54.015 issued rwts: total=5030,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:54.015 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:54.015 job1: (groupid=0, jobs=1): err= 0: pid=2430094: Wed Nov 20 07:10:57 2024 00:08:54.015 read: IOPS=5326, BW=20.8MiB/s (21.8MB/s)(21.0MiB/1007msec) 00:08:54.015 slat (usec): min=2, max=11164, avg=95.19, stdev=645.22 00:08:54.015 clat (usec): min=4230, max=23348, avg=12112.32, stdev=2951.76 00:08:54.015 lat (usec): min=4238, max=23478, avg=12207.50, stdev=2992.95 00:08:54.015 clat percentiles (usec): 00:08:54.015 | 1.00th=[ 6063], 5.00th=[ 8455], 10.00th=[ 9372], 20.00th=[10421], 00:08:54.015 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11600], 60.00th=[11731], 00:08:54.015 | 70.00th=[11994], 80.00th=[13042], 90.00th=[16581], 95.00th=[19006], 00:08:54.015 | 99.00th=[22152], 99.50th=[22676], 99.90th=[23200], 99.95th=[23462], 00:08:54.015 | 99.99th=[23462] 00:08:54.015 write: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec); 0 zone resets 00:08:54.015 slat (usec): min=3, max=8318, avg=77.25, stdev=391.77 00:08:54.015 clat (usec): min=2521, max=23336, avg=11013.60, stdev=2377.15 00:08:54.015 lat (usec): min=2527, max=23347, 
avg=11090.85, stdev=2410.50 00:08:54.015 clat percentiles (usec): 00:08:54.015 | 1.00th=[ 3589], 5.00th=[ 6194], 10.00th=[ 7832], 20.00th=[ 9896], 00:08:54.015 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11207], 60.00th=[11731], 00:08:54.016 | 70.00th=[12256], 80.00th=[12518], 90.00th=[12911], 95.00th=[14353], 00:08:54.016 | 99.00th=[16319], 99.50th=[16581], 99.90th=[22676], 99.95th=[22938], 00:08:54.016 | 99.99th=[23462] 00:08:54.016 bw ( KiB/s): min=20496, max=24560, per=30.95%, avg=22528.00, stdev=2873.68, samples=2 00:08:54.016 iops : min= 5124, max= 6140, avg=5632.00, stdev=718.42, samples=2 00:08:54.016 lat (msec) : 4=0.56%, 10=18.00%, 20=79.85%, 50=1.59% 00:08:54.016 cpu : usr=5.37%, sys=10.24%, ctx=588, majf=0, minf=1 00:08:54.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:08:54.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:54.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:54.016 issued rwts: total=5364,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:54.016 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:54.016 job2: (groupid=0, jobs=1): err= 0: pid=2430097: Wed Nov 20 07:10:57 2024 00:08:54.016 read: IOPS=3345, BW=13.1MiB/s (13.7MB/s)(13.1MiB/1004msec) 00:08:54.016 slat (usec): min=2, max=12045, avg=136.49, stdev=785.85 00:08:54.016 clat (usec): min=3227, max=41594, avg=16522.36, stdev=5051.61 00:08:54.016 lat (usec): min=3535, max=41599, avg=16658.84, stdev=5090.23 00:08:54.016 clat percentiles (usec): 00:08:54.016 | 1.00th=[ 6718], 5.00th=[11600], 10.00th=[12518], 20.00th=[13829], 00:08:54.016 | 30.00th=[14091], 40.00th=[14484], 50.00th=[14746], 60.00th=[15533], 00:08:54.016 | 70.00th=[17695], 80.00th=[18220], 90.00th=[23200], 95.00th=[23987], 00:08:54.016 | 99.00th=[38011], 99.50th=[39584], 99.90th=[41681], 99.95th=[41681], 00:08:54.016 | 99.99th=[41681] 00:08:54.016 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:08:54.016 slat (usec): min=3, max=15999, avg=142.92, stdev=800.12 00:08:54.016 clat (usec): min=3964, max=58082, avg=20012.07, stdev=11506.20 00:08:54.016 lat (usec): min=3976, max=60505, avg=20154.99, stdev=11578.55 00:08:54.016 clat percentiles (usec): 00:08:54.016 | 1.00th=[ 6783], 5.00th=[11863], 10.00th=[12387], 20.00th=[13829], 00:08:54.016 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14484], 60.00th=[14746], 00:08:54.016 | 70.00th=[17957], 80.00th=[26870], 90.00th=[41681], 95.00th=[47973], 00:08:54.016 | 99.00th=[55313], 99.50th=[56886], 99.90th=[57934], 99.95th=[57934], 00:08:54.016 | 99.99th=[57934] 00:08:54.016 bw ( KiB/s): min=12288, max=16384, per=19.70%, avg=14336.00, stdev=2896.31, samples=2 00:08:54.016 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:08:54.016 lat (msec) : 4=0.53%, 10=1.35%, 20=77.96%, 50=18.36%, 100=1.79% 00:08:54.016 cpu : usr=3.39%, sys=4.59%, ctx=315, majf=0, minf=1 00:08:54.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:08:54.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:54.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:54.016 issued rwts: total=3359,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:54.016 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:54.016 job3: (groupid=0, jobs=1): err= 0: pid=2430098: Wed Nov 20 07:10:57 2024 00:08:54.016 read: IOPS=3543, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1013msec) 00:08:54.016 slat (usec): min=3, max=17208, 
avg=135.71, stdev=913.21 00:08:54.016 clat (usec): min=4601, max=66164, avg=16767.14, stdev=8736.15 00:08:54.016 lat (usec): min=4607, max=66200, avg=16902.84, stdev=8808.82 00:08:54.016 clat percentiles (usec): 00:08:54.016 | 1.00th=[ 6915], 5.00th=[12125], 10.00th=[12387], 20.00th=[12911], 00:08:54.016 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13566], 60.00th=[15139], 00:08:54.016 | 70.00th=[15664], 80.00th=[18744], 90.00th=[21890], 95.00th=[31065], 00:08:54.016 | 99.00th=[62653], 99.50th=[62653], 99.90th=[64750], 99.95th=[65799], 00:08:54.016 | 99.99th=[66323] 00:08:54.016 write: IOPS=4043, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1013msec); 0 zone resets 00:08:54.016 slat (usec): min=4, max=30056, avg=114.12, stdev=867.35 00:08:54.016 clat (usec): min=298, max=80605, avg=16672.87, stdev=9105.60 00:08:54.016 lat (usec): min=315, max=80611, avg=16786.99, stdev=9176.92 00:08:54.016 clat percentiles (usec): 00:08:54.016 | 1.00th=[ 2802], 5.00th=[ 7832], 10.00th=[10421], 20.00th=[12518], 00:08:54.016 | 30.00th=[13042], 40.00th=[13698], 50.00th=[14222], 60.00th=[14484], 00:08:54.016 | 70.00th=[16188], 80.00th=[17433], 90.00th=[26346], 95.00th=[36439], 00:08:54.016 | 99.00th=[50070], 99.50th=[62653], 99.90th=[62653], 99.95th=[62653], 00:08:54.016 | 99.99th=[80217] 00:08:54.016 bw ( KiB/s): min=12288, max=19504, per=21.84%, avg=15896.00, stdev=5102.48, samples=2 00:08:54.016 iops : min= 3072, max= 4876, avg=3974.00, stdev=1275.62, samples=2 00:08:54.016 lat (usec) : 500=0.03% 00:08:54.016 lat (msec) : 2=0.38%, 4=0.36%, 10=5.00%, 20=76.98%, 50=15.21% 00:08:54.016 lat (msec) : 100=2.04% 00:08:54.016 cpu : usr=4.84%, sys=5.14%, ctx=392, majf=0, minf=1 00:08:54.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:08:54.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:54.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:54.016 issued rwts: total=3590,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:54.016 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:54.016 00:08:54.016 Run status group 0 (all jobs): 00:08:54.016 READ: bw=66.9MiB/s (70.1MB/s), 13.1MiB/s-20.8MiB/s (13.7MB/s-21.8MB/s), io=67.7MiB (71.0MB), run=1003-1013msec 00:08:54.016 WRITE: bw=71.1MiB/s (74.5MB/s), 13.9MiB/s-21.8MiB/s (14.6MB/s-22.9MB/s), io=72.0MiB (75.5MB), run=1003-1013msec 00:08:54.016 00:08:54.016 Disk stats (read/write): 00:08:54.016 nvme0n1: ios=4117/4487, merge=0/0, ticks=30714/32743, in_queue=63457, util=96.69% 00:08:54.016 nvme0n2: ios=4647/4815, merge=0/0, ticks=44954/42110, in_queue=87064, util=98.17% 00:08:54.016 nvme0n3: ios=2613/3040, merge=0/0, ticks=25069/41649, in_queue=66718, util=96.88% 00:08:54.016 nvme0n4: ios=3072/3159, merge=0/0, ticks=32451/31580, in_queue=64031, util=89.61% 00:08:54.016 07:10:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:08:54.016 [global] 00:08:54.016 thread=1 00:08:54.016 invalidate=1 00:08:54.016 rw=randwrite 00:08:54.016 time_based=1 00:08:54.016 runtime=1 00:08:54.016 ioengine=libaio 00:08:54.016 direct=1 00:08:54.016 bs=4096 00:08:54.016 iodepth=128 00:08:54.016 norandommap=0 00:08:54.016 numjobs=1 00:08:54.016 00:08:54.016 verify_dump=1 00:08:54.016 verify_backlog=512 00:08:54.016 verify_state_save=0 00:08:54.016 do_verify=1 00:08:54.016 verify=crc32c-intel 00:08:54.016 [job0] 00:08:54.016 filename=/dev/nvme0n1 00:08:54.016 [job1] 
00:08:54.016 filename=/dev/nvme0n2 00:08:54.016 [job2] 00:08:54.016 filename=/dev/nvme0n3 00:08:54.016 [job3] 00:08:54.016 filename=/dev/nvme0n4 00:08:54.016 Could not set queue depth (nvme0n1) 00:08:54.016 Could not set queue depth (nvme0n2) 00:08:54.016 Could not set queue depth (nvme0n3) 00:08:54.016 Could not set queue depth (nvme0n4) 00:08:54.016 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:54.016 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:54.016 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:54.016 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:54.016 fio-3.35 00:08:54.016 Starting 4 threads 00:08:55.429 00:08:55.429 job0: (groupid=0, jobs=1): err= 0: pid=2430328: Wed Nov 20 07:10:58 2024 00:08:55.429 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:08:55.429 slat (usec): min=2, max=5612, avg=90.23, stdev=488.21 00:08:55.429 clat (usec): min=2906, max=61952, avg=11961.27, stdev=3026.34 00:08:55.429 lat (usec): min=2918, max=61958, avg=12051.50, stdev=3031.43 00:08:55.429 clat percentiles (usec): 00:08:55.429 | 1.00th=[ 5866], 5.00th=[ 8291], 10.00th=[ 9241], 20.00th=[10814], 00:08:55.429 | 30.00th=[11731], 40.00th=[12125], 50.00th=[12387], 60.00th=[12387], 00:08:55.429 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13304], 95.00th=[14484], 00:08:55.429 | 99.00th=[16450], 99.50th=[19006], 99.90th=[57410], 99.95th=[57410], 00:08:55.429 | 99.99th=[62129] 00:08:55.429 write: IOPS=5120, BW=20.0MiB/s (21.0MB/s)(20.0MiB/1001msec); 0 zone resets 00:08:55.429 slat (usec): min=2, max=15193, avg=95.39, stdev=600.82 00:08:55.429 clat (usec): min=669, max=57581, avg=12732.54, stdev=5331.95 00:08:55.429 lat (usec): min=2738, max=57590, avg=12827.93, stdev=5362.56 00:08:55.429 clat percentiles (usec): 00:08:55.430 | 1.00th=[ 6456], 5.00th=[ 9241], 10.00th=[10290], 20.00th=[10945], 00:08:55.430 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11863], 60.00th=[12125], 00:08:55.430 | 70.00th=[12256], 80.00th=[12518], 90.00th=[13042], 95.00th=[22152], 00:08:55.430 | 99.00th=[35390], 99.50th=[50594], 99.90th=[57410], 99.95th=[57410], 00:08:55.430 | 99.99th=[57410] 00:08:55.430 bw ( KiB/s): min=20480, max=20480, per=28.58%, avg=20480.00, stdev= 0.00, samples=1 00:08:55.430 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:08:55.430 lat (usec) : 750=0.01% 00:08:55.430 lat (msec) : 4=0.31%, 10=10.18%, 20=86.51%, 50=2.58%, 100=0.41% 00:08:55.430 cpu : usr=5.40%, sys=9.90%, ctx=447, majf=0, minf=1 00:08:55.430 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:08:55.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:55.430 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:55.430 issued rwts: total=5120,5126,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:55.430 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:55.430 job1: (groupid=0, jobs=1): err= 0: pid=2430329: Wed Nov 20 07:10:58 2024 00:08:55.430 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:08:55.430 slat (usec): min=2, max=14431, avg=118.10, stdev=751.13 00:08:55.430 clat (usec): min=4702, max=62390, avg=15351.72, stdev=8895.15 00:08:55.430 lat (usec): min=4708, max=70818, avg=15469.81, stdev=8951.80 00:08:55.430 clat percentiles (usec): 
00:08:55.430 | 1.00th=[ 6915], 5.00th=[10159], 10.00th=[10552], 20.00th=[11731], 00:08:55.430 | 30.00th=[12256], 40.00th=[12911], 50.00th=[13304], 60.00th=[13566], 00:08:55.430 | 70.00th=[14091], 80.00th=[14746], 90.00th=[19006], 95.00th=[34866], 00:08:55.430 | 99.00th=[56886], 99.50th=[61604], 99.90th=[62129], 99.95th=[62129], 00:08:55.430 | 99.99th=[62653] 00:08:55.430 write: IOPS=4512, BW=17.6MiB/s (18.5MB/s)(17.7MiB/1003msec); 0 zone resets 00:08:55.430 slat (usec): min=3, max=26637, avg=104.49, stdev=631.72 00:08:55.430 clat (usec): min=2107, max=62256, avg=14218.89, stdev=6657.04 00:08:55.430 lat (usec): min=2706, max=62263, avg=14323.38, stdev=6671.56 00:08:55.430 clat percentiles (usec): 00:08:55.430 | 1.00th=[ 5145], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[10814], 00:08:55.430 | 30.00th=[11469], 40.00th=[12125], 50.00th=[12780], 60.00th=[13173], 00:08:55.430 | 70.00th=[13698], 80.00th=[15533], 90.00th=[22414], 95.00th=[23725], 00:08:55.430 | 99.00th=[39584], 99.50th=[58983], 99.90th=[62129], 99.95th=[62129], 00:08:55.430 | 99.99th=[62129] 00:08:55.430 bw ( KiB/s): min=16384, max=18808, per=24.56%, avg=17596.00, stdev=1714.03, samples=2 00:08:55.430 iops : min= 4096, max= 4702, avg=4399.00, stdev=428.51, samples=2 00:08:55.430 lat (msec) : 4=0.20%, 10=7.49%, 20=81.77%, 50=8.95%, 100=1.59% 00:08:55.430 cpu : usr=4.19%, sys=7.68%, ctx=501, majf=0, minf=2 00:08:55.430 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:08:55.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:55.430 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:55.430 issued rwts: total=4096,4526,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:55.430 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:55.430 job2: (groupid=0, jobs=1): err= 0: pid=2430330: Wed Nov 20 07:10:58 2024 00:08:55.430 read: IOPS=3805, BW=14.9MiB/s (15.6MB/s)(15.0MiB/1006msec) 00:08:55.430 slat (usec): min=2, max=25587, avg=151.69, stdev=1124.15 00:08:55.430 clat (usec): min=3096, max=99320, avg=18263.48, stdev=13506.46 00:08:55.430 lat (usec): min=4264, max=99330, avg=18415.17, stdev=13607.02 00:08:55.430 clat percentiles (usec): 00:08:55.430 | 1.00th=[ 4752], 5.00th=[10683], 10.00th=[11600], 20.00th=[12649], 00:08:55.430 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13698], 60.00th=[14353], 00:08:55.430 | 70.00th=[15139], 80.00th=[19530], 90.00th=[28181], 95.00th=[51643], 00:08:55.430 | 99.00th=[80217], 99.50th=[84411], 99.90th=[99091], 99.95th=[99091], 00:08:55.430 | 99.99th=[99091] 00:08:55.430 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:08:55.430 slat (usec): min=3, max=11170, avg=94.03, stdev=474.72 00:08:55.430 clat (msec): min=2, max=123, avg=13.95, stdev= 8.79 00:08:55.430 lat (msec): min=2, max=123, avg=14.04, stdev= 8.83 00:08:55.430 clat percentiles (msec): 00:08:55.430 | 1.00th=[ 5], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 13], 00:08:55.430 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 14], 60.00th=[ 14], 00:08:55.430 | 70.00th=[ 14], 80.00th=[ 15], 90.00th=[ 15], 95.00th=[ 19], 00:08:55.430 | 99.00th=[ 34], 99.50th=[ 111], 99.90th=[ 120], 99.95th=[ 120], 00:08:55.430 | 99.99th=[ 124] 00:08:55.430 bw ( KiB/s): min=15664, max=17104, per=22.86%, avg=16384.00, stdev=1018.23, samples=2 00:08:55.430 iops : min= 3916, max= 4276, avg=4096.00, stdev=254.56, samples=2 00:08:55.430 lat (msec) : 4=0.44%, 10=7.21%, 20=80.89%, 50=8.01%, 100=3.14% 00:08:55.430 lat (msec) : 250=0.30% 00:08:55.430 cpu : usr=4.08%, sys=7.56%, 
ctx=445, majf=0, minf=1 00:08:55.430 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:08:55.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:55.430 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:55.430 issued rwts: total=3828,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:55.430 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:55.430 job3: (groupid=0, jobs=1): err= 0: pid=2430331: Wed Nov 20 07:10:58 2024 00:08:55.430 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:08:55.430 slat (usec): min=2, max=10550, avg=108.48, stdev=642.33 00:08:55.430 clat (usec): min=1181, max=75118, avg=14084.15, stdev=4648.01 00:08:55.430 lat (usec): min=1186, max=75153, avg=14192.63, stdev=4678.10 00:08:55.430 clat percentiles (usec): 00:08:55.430 | 1.00th=[ 4228], 5.00th=[10421], 10.00th=[11076], 20.00th=[12518], 00:08:55.430 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14091], 60.00th=[14353], 00:08:55.430 | 70.00th=[14746], 80.00th=[15139], 90.00th=[16450], 95.00th=[17957], 00:08:55.430 | 99.00th=[19006], 99.50th=[22414], 99.90th=[73925], 99.95th=[73925], 00:08:55.430 | 99.99th=[74974] 00:08:55.430 write: IOPS=4269, BW=16.7MiB/s (17.5MB/s)(16.7MiB/1001msec); 0 zone resets 00:08:55.430 slat (usec): min=3, max=20515, avg=114.88, stdev=738.09 00:08:55.430 clat (usec): min=553, max=98814, avg=16229.65, stdev=12821.80 00:08:55.430 lat (usec): min=558, max=98829, avg=16344.53, stdev=12885.19 00:08:55.430 clat percentiles (usec): 00:08:55.430 | 1.00th=[ 1004], 5.00th=[ 4490], 10.00th=[ 7439], 20.00th=[10814], 00:08:55.430 | 30.00th=[12649], 40.00th=[13829], 50.00th=[14222], 60.00th=[14484], 00:08:55.430 | 70.00th=[14615], 80.00th=[17695], 90.00th=[23200], 95.00th=[34341], 00:08:55.430 | 99.00th=[89654], 99.50th=[94897], 99.90th=[96994], 99.95th=[99091], 00:08:55.430 | 99.99th=[99091] 00:08:55.430 bw ( KiB/s): min=16384, max=16384, per=22.86%, avg=16384.00, stdev= 0.00, samples=1 00:08:55.430 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:08:55.430 lat (usec) : 750=0.05%, 1000=0.45% 00:08:55.430 lat (msec) : 2=0.59%, 4=1.42%, 10=7.90%, 20=81.27%, 50=6.42% 00:08:55.430 lat (msec) : 100=1.91% 00:08:55.430 cpu : usr=3.80%, sys=6.90%, ctx=424, majf=0, minf=1 00:08:55.430 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:08:55.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:55.430 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:55.430 issued rwts: total=4096,4274,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:55.430 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:55.430 00:08:55.430 Run status group 0 (all jobs): 00:08:55.430 READ: bw=66.6MiB/s (69.8MB/s), 14.9MiB/s-20.0MiB/s (15.6MB/s-20.9MB/s), io=67.0MiB (70.2MB), run=1001-1006msec 00:08:55.430 WRITE: bw=70.0MiB/s (73.4MB/s), 15.9MiB/s-20.0MiB/s (16.7MB/s-21.0MB/s), io=70.4MiB (73.8MB), run=1001-1006msec 00:08:55.430 00:08:55.430 Disk stats (read/write): 00:08:55.430 nvme0n1: ios=4146/4590, merge=0/0, ticks=18998/22272, in_queue=41270, util=87.17% 00:08:55.430 nvme0n2: ios=3559/3584, merge=0/0, ticks=23293/27438, in_queue=50731, util=84.06% 00:08:55.430 nvme0n3: ios=3072/3407, merge=0/0, ticks=30114/26777, in_queue=56891, util=88.95% 00:08:55.430 nvme0n4: ios=3457/3584, merge=0/0, ticks=26657/43136, in_queue=69793, util=89.60% 00:08:55.430 07:10:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 
00:08:55.431 07:10:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2430467 00:08:55.431 07:10:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:08:55.431 07:10:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:08:55.431 [global] 00:08:55.431 thread=1 00:08:55.431 invalidate=1 00:08:55.431 rw=read 00:08:55.431 time_based=1 00:08:55.431 runtime=10 00:08:55.431 ioengine=libaio 00:08:55.431 direct=1 00:08:55.431 bs=4096 00:08:55.431 iodepth=1 00:08:55.431 norandommap=1 00:08:55.431 numjobs=1 00:08:55.431 00:08:55.431 [job0] 00:08:55.431 filename=/dev/nvme0n1 00:08:55.431 [job1] 00:08:55.431 filename=/dev/nvme0n2 00:08:55.431 [job2] 00:08:55.431 filename=/dev/nvme0n3 00:08:55.431 [job3] 00:08:55.431 filename=/dev/nvme0n4 00:08:55.431 Could not set queue depth (nvme0n1) 00:08:55.431 Could not set queue depth (nvme0n2) 00:08:55.431 Could not set queue depth (nvme0n3) 00:08:55.431 Could not set queue depth (nvme0n4) 00:08:55.431 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:55.431 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:55.431 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:55.431 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:55.431 fio-3.35 00:08:55.431 Starting 4 threads 00:08:58.776 07:11:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:08:58.776 07:11:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:08:58.776 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=25010176, buflen=4096 00:08:58.776 fio: pid=2430564, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:58.776 07:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:58.776 07:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:08:58.776 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=17358848, buflen=4096 00:08:58.776 fio: pid=2430563, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:59.034 07:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:59.034 07:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:08:59.034 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=430080, buflen=4096 00:08:59.034 fio: pid=2430561, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:59.292 07:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:59.292 07:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:08:59.293 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=20344832, buflen=4096 00:08:59.293 fio: pid=2430562, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:59.293 00:08:59.293 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2430561: Wed Nov 20 07:11:02 2024 00:08:59.293 read: IOPS=30, BW=120KiB/s (122kB/s)(420KiB/3514msec) 00:08:59.293 slat (nsec): min=6151, max=38844, avg=19681.87, stdev=8580.72 00:08:59.293 clat (usec): min=199, max=41269, avg=33218.19, stdev=16042.11 00:08:59.293 lat (usec): min=216, max=41285, avg=33237.71, stdev=16044.99 00:08:59.293 clat percentiles (usec): 00:08:59.293 | 1.00th=[ 210], 5.00th=[ 253], 10.00th=[ 302], 20.00th=[40633], 00:08:59.293 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:08:59.293 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:08:59.293 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:59.293 | 99.99th=[41157] 00:08:59.293 bw ( KiB/s): min= 96, max= 200, per=0.72%, avg=118.67, stdev=40.92, samples=6 00:08:59.293 iops : min= 24, max= 50, avg=29.67, stdev=10.23, samples=6 00:08:59.293 lat (usec) : 250=4.72%, 500=14.15% 00:08:59.293 lat (msec) : 50=80.19% 00:08:59.293 cpu : usr=0.09%, sys=0.00%, ctx=107, majf=0, minf=2 00:08:59.293 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:59.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.293 complete : 0=0.9%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.293 issued rwts: total=106,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:59.293 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:59.293 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2430562: Wed Nov 20 07:11:02 2024 00:08:59.293 read: IOPS=1314, BW=5257KiB/s (5384kB/s)(19.4MiB/3779msec) 00:08:59.293 slat (usec): min=5, max=3920, avg=13.89, stdev=55.72 00:08:59.293 clat (usec): min=161, max=42191, avg=738.62, stdev=4572.87 00:08:59.293 lat (usec): min=167, max=45975, avg=752.52, stdev=4581.16 00:08:59.293 clat percentiles (usec): 00:08:59.293 | 1.00th=[ 178], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 198], 00:08:59.293 | 30.00th=[ 206], 40.00th=[ 221], 50.00th=[ 231], 60.00th=[ 235], 00:08:59.293 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 253], 95.00th=[ 260], 00:08:59.293 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:08:59.293 | 99.99th=[42206] 00:08:59.293 bw ( KiB/s): min= 96, max=17736, per=34.74%, avg=5669.14, stdev=7357.08, samples=7 00:08:59.293 iops : min= 24, max= 4434, avg=1417.29, stdev=1839.27, samples=7 00:08:59.293 lat (usec) : 250=86.61%, 500=12.04%, 750=0.02%, 1000=0.02% 00:08:59.293 lat (msec) : 2=0.02%, 4=0.02%, 50=1.25% 00:08:59.293 cpu : usr=0.98%, sys=2.75%, ctx=4972, majf=0, minf=1 00:08:59.293 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:59.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.293 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.293 issued rwts: total=4968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:59.293 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:59.293 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, 
error=Operation not supported): pid=2430563: Wed Nov 20 07:11:02 2024 00:08:59.293 read: IOPS=1321, BW=5286KiB/s (5413kB/s)(16.6MiB/3207msec) 00:08:59.293 slat (nsec): min=4451, max=70699, avg=13937.89, stdev=7157.37 00:08:59.293 clat (usec): min=184, max=41337, avg=733.47, stdev=4434.86 00:08:59.293 lat (usec): min=191, max=41355, avg=747.40, stdev=4435.21 00:08:59.293 clat percentiles (usec): 00:08:59.293 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 215], 00:08:59.293 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 237], 60.00th=[ 241], 00:08:59.293 | 70.00th=[ 247], 80.00th=[ 255], 90.00th=[ 277], 95.00th=[ 347], 00:08:59.293 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:59.293 | 99.99th=[41157] 00:08:59.293 bw ( KiB/s): min= 144, max=15728, per=34.58%, avg=5642.67, stdev=7304.99, samples=6 00:08:59.293 iops : min= 36, max= 3932, avg=1411.00, stdev=1826.71, samples=6 00:08:59.293 lat (usec) : 250=74.99%, 500=23.21%, 750=0.45%, 1000=0.07% 00:08:59.293 lat (msec) : 2=0.05%, 50=1.20% 00:08:59.293 cpu : usr=1.37%, sys=2.06%, ctx=4240, majf=0, minf=2 00:08:59.293 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:59.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.293 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.293 issued rwts: total=4239,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:59.293 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:59.293 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2430564: Wed Nov 20 07:11:02 2024 00:08:59.293 read: IOPS=2096, BW=8384KiB/s (8586kB/s)(23.9MiB/2913msec) 00:08:59.293 slat (nsec): min=5644, max=61356, avg=12928.80, stdev=5741.27 00:08:59.293 clat (usec): min=177, max=42004, avg=456.89, stdev=2967.15 00:08:59.293 lat (usec): min=183, max=42018, avg=469.82, stdev=2968.04 00:08:59.293 clat percentiles (usec): 00:08:59.293 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 210], 00:08:59.293 | 30.00th=[ 219], 40.00th=[ 233], 50.00th=[ 241], 60.00th=[ 247], 00:08:59.293 | 70.00th=[ 251], 80.00th=[ 265], 90.00th=[ 302], 95.00th=[ 314], 00:08:59.293 | 99.00th=[ 338], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:08:59.293 | 99.99th=[42206] 00:08:59.293 bw ( KiB/s): min= 96, max=15632, per=43.19%, avg=7048.00, stdev=5837.40, samples=5 00:08:59.293 iops : min= 24, max= 3908, avg=1762.00, stdev=1459.35, samples=5 00:08:59.293 lat (usec) : 250=67.30%, 500=32.03%, 750=0.08%, 1000=0.02% 00:08:59.293 lat (msec) : 2=0.02%, 4=0.02%, 50=0.52% 00:08:59.293 cpu : usr=1.85%, sys=3.98%, ctx=6109, majf=0, minf=1 00:08:59.293 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:59.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.293 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.293 issued rwts: total=6107,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:59.293 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:59.293 00:08:59.293 Run status group 0 (all jobs): 00:08:59.293 READ: bw=15.9MiB/s (16.7MB/s), 120KiB/s-8384KiB/s (122kB/s-8586kB/s), io=60.2MiB (63.1MB), run=2913-3779msec 00:08:59.293 00:08:59.293 Disk stats (read/write): 00:08:59.293 nvme0n1: ios=128/0, merge=0/0, ticks=3475/0, in_queue=3475, util=99.43% 00:08:59.293 nvme0n2: ios=4987/0, merge=0/0, ticks=3623/0, in_queue=3623, util=99.14% 00:08:59.293 nvme0n3: ios=4282/0, merge=0/0, 
ticks=3132/0, in_queue=3132, util=100.00% 00:08:59.293 nvme0n4: ios=5990/0, merge=0/0, ticks=3115/0, in_queue=3115, util=98.98% 00:08:59.551 07:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:59.551 07:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:08:59.810 07:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:59.810 07:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:00.376 07:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:00.376 07:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:00.376 07:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:00.376 07:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:00.633 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:00.633 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2430467 00:09:00.633 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:00.891 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:00.891 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.892 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:00.892 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:09:00.892 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:09:00.892 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:00.892 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:09:00.892 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:00.892 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:09:00.892 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:00.892 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:00.892 nvmf hotplug test: fio failed as expected 00:09:00.892 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:01.150 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:01.150 07:11:04 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:01.150 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:01.150 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:01.150 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:01.150 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:01.150 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:01.150 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:01.150 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:01.150 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:01.150 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:01.150 rmmod nvme_tcp 00:09:01.150 rmmod nvme_fabrics 00:09:01.150 rmmod nvme_keyring 00:09:01.150 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:01.150 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:01.150 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:01.150 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2428439 ']' 00:09:01.150 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2428439 00:09:01.150 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 2428439 ']' 00:09:01.150 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 2428439 00:09:01.150 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:09:01.150 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:01.150 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2428439 00:09:01.409 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:01.409 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:01.409 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2428439' 00:09:01.409 killing process with pid 2428439 00:09:01.409 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 2428439 00:09:01.409 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 2428439 00:09:01.667 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:01.667 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:01.667 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:01.667 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:01.667 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:01.667 07:11:04 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:01.667 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:01.667 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:01.667 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:01.667 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.667 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:01.667 07:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.573 07:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:03.573 00:09:03.573 real 0m24.115s 00:09:03.573 user 1m25.190s 00:09:03.573 sys 0m7.060s 00:09:03.573 07:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:03.573 07:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:03.573 ************************************ 00:09:03.573 END TEST nvmf_fio_target 00:09:03.573 ************************************ 00:09:03.573 07:11:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:03.573 07:11:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:03.573 07:11:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:03.573 07:11:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:03.573 ************************************ 00:09:03.573 START TEST nvmf_bdevio 00:09:03.573 ************************************ 00:09:03.573 07:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:03.573 * Looking for test storage... 
00:09:03.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:03.573 07:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:03.573 07:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:09:03.573 07:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:03.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.833 --rc genhtml_branch_coverage=1 00:09:03.833 --rc genhtml_function_coverage=1 00:09:03.833 --rc genhtml_legend=1 00:09:03.833 --rc geninfo_all_blocks=1 00:09:03.833 --rc geninfo_unexecuted_blocks=1 00:09:03.833 00:09:03.833 ' 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:03.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.833 --rc genhtml_branch_coverage=1 00:09:03.833 --rc genhtml_function_coverage=1 00:09:03.833 --rc genhtml_legend=1 00:09:03.833 --rc geninfo_all_blocks=1 00:09:03.833 --rc geninfo_unexecuted_blocks=1 00:09:03.833 00:09:03.833 ' 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:03.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.833 --rc genhtml_branch_coverage=1 00:09:03.833 --rc genhtml_function_coverage=1 00:09:03.833 --rc genhtml_legend=1 00:09:03.833 --rc geninfo_all_blocks=1 00:09:03.833 --rc geninfo_unexecuted_blocks=1 00:09:03.833 00:09:03.833 ' 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:03.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.833 --rc genhtml_branch_coverage=1 00:09:03.833 --rc genhtml_function_coverage=1 00:09:03.833 --rc genhtml_legend=1 00:09:03.833 --rc geninfo_all_blocks=1 00:09:03.833 --rc geninfo_unexecuted_blocks=1 00:09:03.833 00:09:03.833 ' 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.833 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.834 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.834 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.834 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:03.834 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.834 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:03.834 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:03.834 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:03.834 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:03.834 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:03.834 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:03.834 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:03.834 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:03.834 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:03.834 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:03.834 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:03.834 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:03.834 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:03.834 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:09:03.834 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:03.834 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:03.834 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:03.834 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:03.834 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:03.834 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.834 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:03.834 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.834 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:03.834 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:03.834 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:03.834 07:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:06.372 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:06.372 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:06.372 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:06.372 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:06.372 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:06.372 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:06.372 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:06.372 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:06.372 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:06.372 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:06.372 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:06.372 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:06.372 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:06.372 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:06.372 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:06.372 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:06.372 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:06.372 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:06.372 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:06.372 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:06.372 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:06.372 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:06.372 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:06.372 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:06.373 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:06.373 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:06.373 07:11:09 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:06.373 Found net devices under 0000:09:00.0: cvl_0_0 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:06.373 Found net devices under 0000:09:00.1: cvl_0_1 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:06.373 
07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:06.373 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:06.373 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:09:06.373 00:09:06.373 --- 10.0.0.2 ping statistics --- 00:09:06.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.373 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:06.373 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:06.373 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:09:06.373 00:09:06.373 --- 10.0.0.1 ping statistics --- 00:09:06.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.373 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:06.373 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2433324 00:09:06.374 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:06.374 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2433324 00:09:06.374 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 2433324 ']' 00:09:06.374 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.374 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:06.374 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.374 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:06.374 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:06.374 [2024-11-20 07:11:09.526222] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
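For reference, the nvmf_tcp_init sequence traced above builds a two-port loopback topology out of the E810 NIC: cvl_0_0 is moved into the namespace cvl_0_0_ns_spdk and becomes the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), TCP port 4420 is opened, and reachability is verified with ping in both directions. A minimal hand-run sketch of the same topology (interface names, namespace name, addresses and port are taken from the trace; nothing else is implied):

# target NIC gets its own namespace and the target address
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# initiator NIC stays in the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip link set cvl_0_1 up
# let NVMe/TCP traffic in, then confirm both directions work
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1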
00:09:06.374 [2024-11-20 07:11:09.526309] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:06.374 [2024-11-20 07:11:09.599892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:06.374 [2024-11-20 07:11:09.659930] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:06.374 [2024-11-20 07:11:09.659979] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:06.374 [2024-11-20 07:11:09.660009] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:06.374 [2024-11-20 07:11:09.660020] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:06.374 [2024-11-20 07:11:09.660029] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:06.374 [2024-11-20 07:11:09.661694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:06.374 [2024-11-20 07:11:09.661753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:06.374 [2024-11-20 07:11:09.661818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:06.374 [2024-11-20 07:11:09.661822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:06.374 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:06.374 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:09:06.374 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:06.374 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:06.374 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:06.632 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:06.633 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:06.633 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.633 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:06.633 [2024-11-20 07:11:09.815251] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:06.633 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.633 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:06.633 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.633 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:06.633 Malloc0 00:09:06.633 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.633 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:06.633 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.633 07:11:09 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:06.633 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.633 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:06.633 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.633 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:06.633 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.633 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:06.633 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.633 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:06.633 [2024-11-20 07:11:09.884079] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:06.633 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.633 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:06.633 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:06.633 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:06.633 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:06.633 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:06.633 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:06.633 { 00:09:06.633 "params": { 00:09:06.633 "name": "Nvme$subsystem", 00:09:06.633 "trtype": "$TEST_TRANSPORT", 00:09:06.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:06.633 "adrfam": "ipv4", 00:09:06.633 "trsvcid": "$NVMF_PORT", 00:09:06.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:06.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:06.633 "hdgst": ${hdgst:-false}, 00:09:06.633 "ddgst": ${ddgst:-false} 00:09:06.633 }, 00:09:06.633 "method": "bdev_nvme_attach_controller" 00:09:06.633 } 00:09:06.633 EOF 00:09:06.633 )") 00:09:06.633 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:06.633 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:09:06.633 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:06.633 07:11:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:06.633 "params": { 00:09:06.633 "name": "Nvme1", 00:09:06.633 "trtype": "tcp", 00:09:06.633 "traddr": "10.0.0.2", 00:09:06.633 "adrfam": "ipv4", 00:09:06.633 "trsvcid": "4420", 00:09:06.633 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:06.633 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:06.633 "hdgst": false, 00:09:06.633 "ddgst": false 00:09:06.633 }, 00:09:06.633 "method": "bdev_nvme_attach_controller" 00:09:06.633 }' 00:09:06.633 [2024-11-20 07:11:09.935849] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
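The rpc_cmd calls traced above provision the target over its JSON-RPC socket: a TCP transport, a 64 MiB / 512-byte malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 as its namespace, and a listener on 10.0.0.2:4420. The same provisioning can be driven by hand with scripts/rpc.py; a sketch, with every RPC name and argument copied from the trace (the socket path is the default /var/tmp/spdk.sock that waitforlisten polls above):

RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -u 8192        # flags as passed above
$RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MiB bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420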
00:09:06.633 [2024-11-20 07:11:09.935914] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2433359 ] 00:09:06.633 [2024-11-20 07:11:10.005193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:06.891 [2024-11-20 07:11:10.073737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.891 [2024-11-20 07:11:10.073790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:06.891 [2024-11-20 07:11:10.073794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.891 I/O targets: 00:09:06.891 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:06.891 00:09:06.891 00:09:06.891 CUnit - A unit testing framework for C - Version 2.1-3 00:09:06.891 http://cunit.sourceforge.net/ 00:09:06.891 00:09:06.891 00:09:06.891 Suite: bdevio tests on: Nvme1n1 00:09:07.149 Test: blockdev write read block ...passed 00:09:07.149 Test: blockdev write zeroes read block ...passed 00:09:07.149 Test: blockdev write zeroes read no split ...passed 00:09:07.149 Test: blockdev write zeroes read split ...passed 00:09:07.149 Test: blockdev write zeroes read split partial ...passed 00:09:07.149 Test: blockdev reset ...[2024-11-20 07:11:10.460938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:07.149 [2024-11-20 07:11:10.461040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af8640 (9): Bad file descriptor 00:09:07.149 [2024-11-20 07:11:10.475913] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
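The bdevio app above receives its configuration on /dev/fd/62 from gen_nvmf_target_json; only the bdev_nvme_attach_controller fragment is printed in the trace. Wrapped in SPDK's usual "subsystems"/"bdev" JSON config layout (the wrapper is assumed here, the params block is copied verbatim from the trace), an equivalent run against a plain file would look roughly like:

cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./test/bdev/bdevio/bdevio --json /tmp/nvme1.json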
00:09:07.149 passed 00:09:07.149 Test: blockdev write read 8 blocks ...passed 00:09:07.149 Test: blockdev write read size > 128k ...passed 00:09:07.149 Test: blockdev write read invalid size ...passed 00:09:07.149 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:07.149 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:07.149 Test: blockdev write read max offset ...passed 00:09:07.406 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:07.406 Test: blockdev writev readv 8 blocks ...passed 00:09:07.406 Test: blockdev writev readv 30 x 1block ...passed 00:09:07.406 Test: blockdev writev readv block ...passed 00:09:07.406 Test: blockdev writev readv size > 128k ...passed 00:09:07.406 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:07.406 Test: blockdev comparev and writev ...[2024-11-20 07:11:10.691461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:07.406 [2024-11-20 07:11:10.691498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:07.406 [2024-11-20 07:11:10.691523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:07.406 [2024-11-20 07:11:10.691541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:07.406 [2024-11-20 07:11:10.691871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:07.406 [2024-11-20 07:11:10.691896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:07.406 [2024-11-20 07:11:10.691918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:07.406 [2024-11-20 07:11:10.691935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:07.406 [2024-11-20 07:11:10.692230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:07.406 [2024-11-20 07:11:10.692262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:07.407 [2024-11-20 07:11:10.692284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:07.407 [2024-11-20 07:11:10.692301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:07.407 [2024-11-20 07:11:10.692615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:07.407 [2024-11-20 07:11:10.692639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:07.407 [2024-11-20 07:11:10.692660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:07.407 [2024-11-20 07:11:10.692676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:07.407 passed 00:09:07.407 Test: blockdev nvme passthru rw ...passed 00:09:07.407 Test: blockdev nvme passthru vendor specific ...[2024-11-20 07:11:10.776532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:07.407 [2024-11-20 07:11:10.776560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:07.407 [2024-11-20 07:11:10.776703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:07.407 [2024-11-20 07:11:10.776726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:07.407 [2024-11-20 07:11:10.776870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:07.407 [2024-11-20 07:11:10.776892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:07.407 [2024-11-20 07:11:10.777029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:07.407 [2024-11-20 07:11:10.777052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:07.407 passed 00:09:07.407 Test: blockdev nvme admin passthru ...passed 00:09:07.407 Test: blockdev copy ...passed 00:09:07.407 00:09:07.407 Run Summary: Type Total Ran Passed Failed Inactive 00:09:07.407 suites 1 1 n/a 0 0 00:09:07.407 tests 23 23 23 0 0 00:09:07.407 asserts 152 152 152 0 n/a 00:09:07.407 00:09:07.407 Elapsed time = 1.053 seconds 00:09:07.665 07:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:07.665 07:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.665 07:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:07.665 07:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.665 07:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:07.665 07:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:07.665 07:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:07.665 07:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:07.665 07:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:07.665 07:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:07.665 07:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:07.665 07:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:07.665 rmmod nvme_tcp 00:09:07.665 rmmod nvme_fabrics 00:09:07.665 rmmod nvme_keyring 00:09:07.924 07:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:07.924 07:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:07.924 07:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:09:07.924 07:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2433324 ']' 00:09:07.924 07:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2433324 00:09:07.924 07:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 2433324 ']' 00:09:07.924 07:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 2433324 00:09:07.924 07:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:09:07.924 07:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:07.924 07:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2433324 00:09:07.924 07:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:09:07.924 07:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:09:07.924 07:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2433324' 00:09:07.924 killing process with pid 2433324 00:09:07.924 07:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 2433324 00:09:07.924 07:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 2433324 00:09:08.183 07:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:08.183 07:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:08.183 07:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:08.183 07:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:08.183 07:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:08.183 07:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:08.183 07:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:08.183 07:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:08.183 07:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:08.183 07:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.183 07:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:08.183 07:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.091 07:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:10.091 00:09:10.091 real 0m6.525s 00:09:10.091 user 0m9.993s 00:09:10.091 sys 0m2.197s 00:09:10.091 07:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:10.091 07:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:10.091 ************************************ 00:09:10.091 END TEST nvmf_bdevio 00:09:10.091 ************************************ 00:09:10.091 07:11:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:10.091 00:09:10.091 real 3m55.812s 00:09:10.091 user 10m14.422s 00:09:10.091 sys 1m7.510s 
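nvmftestfini above tears the fixture down: the subsystem is deleted over RPC, the nvmf_tgt process (pid 2433324 here) is killed, the NVMe/TCP kernel modules are unloaded, the SPDK_NVMF-tagged iptables rule is dropped by filtering iptables-save output, and leftover addresses are flushed. A by-hand equivalent sketch (pid, interface and namespace names come from the trace; remove_spdk_ns itself is not expanded in this excerpt, so the ip netns delete line is an assumption):

./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill 2433324                                            # the nvmf_tgt started earlier
modprobe -r nvme-tcp                                    # cascades to nvme_fabrics/nvme_keyring when unused
iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the test's ACCEPT rule
ip -4 addr flush cvl_0_1
ip netns delete cvl_0_0_ns_spdk                         # assumed equivalent of remove_spdk_ns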
00:09:10.091 07:11:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:10.091 07:11:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:10.091 ************************************ 00:09:10.091 END TEST nvmf_target_core 00:09:10.091 ************************************ 00:09:10.091 07:11:13 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:10.091 07:11:13 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:10.091 07:11:13 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:10.091 07:11:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:10.350 ************************************ 00:09:10.350 START TEST nvmf_target_extra 00:09:10.350 ************************************ 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:10.350 * Looking for test storage... 00:09:10.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:10.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.350 --rc genhtml_branch_coverage=1 00:09:10.350 --rc genhtml_function_coverage=1 00:09:10.350 --rc genhtml_legend=1 00:09:10.350 --rc geninfo_all_blocks=1 00:09:10.350 --rc geninfo_unexecuted_blocks=1 00:09:10.350 00:09:10.350 ' 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:10.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.350 --rc genhtml_branch_coverage=1 00:09:10.350 --rc genhtml_function_coverage=1 00:09:10.350 --rc genhtml_legend=1 00:09:10.350 --rc geninfo_all_blocks=1 00:09:10.350 --rc geninfo_unexecuted_blocks=1 00:09:10.350 00:09:10.350 ' 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:10.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.350 --rc genhtml_branch_coverage=1 00:09:10.350 --rc genhtml_function_coverage=1 00:09:10.350 --rc genhtml_legend=1 00:09:10.350 --rc geninfo_all_blocks=1 00:09:10.350 --rc geninfo_unexecuted_blocks=1 00:09:10.350 00:09:10.350 ' 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:10.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.350 --rc genhtml_branch_coverage=1 00:09:10.350 --rc genhtml_function_coverage=1 00:09:10.350 --rc genhtml_legend=1 00:09:10.350 --rc geninfo_all_blocks=1 00:09:10.350 --rc geninfo_unexecuted_blocks=1 00:09:10.350 00:09:10.350 ' 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
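The scripts/common.sh trace above ("lt 1.15 2" via cmp_versions) is a field-wise numeric version compare: both strings are split on '.', '-' and ':' and the fields are compared left to right, so lcov 1.15 sorts below 2 and the legacy --rc lcov_*_coverage spelling is selected. A condensed sketch of the same idea (the helper name lt_version is made up; purely numeric fields are assumed, whereas the original also normalizes non-numeric parts):

lt_version() {                        # true (0) when $1 < $2
    local -a ver1 ver2
    local v n
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < n; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1                          # equal is not less-than
}

lt_version 1.15 2 && echo "lcov older than 2.x: keep the --rc lcov_*_coverage flags"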
00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:10.350 07:11:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:10.351 07:11:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:10.351 07:11:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:10.351 07:11:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:10.351 07:11:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:10.351 07:11:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:10.351 07:11:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:10.351 07:11:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:10.351 07:11:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:10.351 07:11:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:10.351 07:11:13 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:10.351 07:11:13 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.351 07:11:13 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.351 07:11:13 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.351 07:11:13 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:10.351 07:11:13 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.351 07:11:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:10.351 07:11:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:10.351 07:11:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:10.351 07:11:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:10.351 07:11:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:10.351 07:11:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:10.351 07:11:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:10.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:10.351 07:11:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:10.351 07:11:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:10.351 07:11:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:10.351 07:11:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:10.351 07:11:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:10.351 07:11:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:10.351 07:11:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:10.351 07:11:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:10.351 07:11:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:10.351 07:11:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:10.351 ************************************ 00:09:10.351 START TEST nvmf_example 00:09:10.351 ************************************ 00:09:10.351 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:10.610 * Looking for test storage... 
00:09:10.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:10.610 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:10.610 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:09:10.610 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:10.610 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:10.610 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:10.610 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:10.610 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:10.610 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:10.610 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:10.610 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:10.610 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:10.610 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:10.610 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:10.610 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:10.610 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:10.610 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:10.610 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:10.610 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:10.610 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:10.610 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:10.610 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:10.610 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:10.610 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:10.610 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:10.610 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:10.610 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:10.610 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:10.610 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:10.610 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:09:10.610 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:10.610 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:10.610 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:10.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.611 --rc genhtml_branch_coverage=1 00:09:10.611 --rc genhtml_function_coverage=1 00:09:10.611 --rc genhtml_legend=1 00:09:10.611 --rc geninfo_all_blocks=1 00:09:10.611 --rc geninfo_unexecuted_blocks=1 00:09:10.611 00:09:10.611 ' 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:10.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.611 --rc genhtml_branch_coverage=1 00:09:10.611 --rc genhtml_function_coverage=1 00:09:10.611 --rc genhtml_legend=1 00:09:10.611 --rc geninfo_all_blocks=1 00:09:10.611 --rc geninfo_unexecuted_blocks=1 00:09:10.611 00:09:10.611 ' 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:10.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.611 --rc genhtml_branch_coverage=1 00:09:10.611 --rc genhtml_function_coverage=1 00:09:10.611 --rc genhtml_legend=1 00:09:10.611 --rc geninfo_all_blocks=1 00:09:10.611 --rc geninfo_unexecuted_blocks=1 00:09:10.611 00:09:10.611 ' 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:10.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.611 --rc genhtml_branch_coverage=1 00:09:10.611 --rc genhtml_function_coverage=1 00:09:10.611 --rc genhtml_legend=1 00:09:10.611 --rc geninfo_all_blocks=1 00:09:10.611 --rc geninfo_unexecuted_blocks=1 00:09:10.611 00:09:10.611 ' 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:10.611 07:11:13 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:10.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:10.611 07:11:13 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:10.611 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:10.612 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:10.612 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:10.612 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:10.612 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:10.612 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:10.612 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:10.612 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:10.612 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:10.612 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.612 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:10.612 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.612 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:10.612 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:10.612 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:10.612 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:09:13.144 07:11:16 
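The `build_nvmf_example_args` trace above assembles the example command as a bash array (`NVMF_EXAMPLE`) so a namespace wrapper can be prepended later and the whole thing expanded word by word. A hedged sketch of that pattern; paths and values here are illustrative, not the exact helper:

```bash
# Compose the command as an array; quoting of each argument is preserved on expansion.
SPDK_EXAMPLE_DIR=./build/examples      # assumption: run from an SPDK build tree
NVMF_APP_SHM_ID=0
NO_HUGE=()                             # stays empty unless a no-hugepage run is requested

NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf")
NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000)
NVMF_EXAMPLE+=("${NO_HUGE[@]}")        # empty array expands to nothing, not an empty word

# Later in the trace the netns wrapper is prepended the same way:
NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
echo "${NVMF_EXAMPLE[@]}"
```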
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:13.144 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:13.144 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:13.144 Found net devices under 0000:09:00.0: cvl_0_0 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:13.144 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:13.145 Found net devices under 0000:09:00.1: cvl_0_1 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:13.145 07:11:16 
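The discovery above resolves each supported PCI function to its kernel network interface through sysfs. A condensed sketch of that lookup; the address and the resulting `cvl_*` names are specific to this host:

```bash
# Map a PCI function to the network interfaces the kernel created for it.
pci=0000:09:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"
```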
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:13.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:13.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.391 ms 00:09:13.145 00:09:13.145 --- 10.0.0.2 ping statistics --- 00:09:13.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.145 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:13.145 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:13.145 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:09:13.145 00:09:13.145 --- 10.0.0.1 ping statistics --- 00:09:13.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.145 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2435613 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2435613 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # '[' -z 2435613 ']' 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:13.145 07:11:16 
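Condensed, the `nvmf_tcp_init` sequence traced above splits the two E810 ports across a network-namespace boundary so target and initiator traffic crosses real NIC hardware. Interface names, addresses, and the iptables rule below are taken from the trace; the address flushes are omitted:

```bash
# Target port cvl_0_0 moves into its own namespace with 10.0.0.2/24; initiator
# port cvl_0_1 stays in the root namespace with 10.0.0.1/24; TCP/4420 is opened.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                  # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> root namespace
```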
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:13.145 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:14.079 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:14.079 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@866 -- # return 0 00:09:14.079 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:14.079 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:14.079 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:14.080 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:14.080 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.080 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:14.080 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.080 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:14.080 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.080 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:14.080 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.080 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:14.080 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:14.080 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.080 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:14.080 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.080 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:14.080 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:14.080 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.080 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:14.080 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.080 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:14.080 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:14.080 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:14.080 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.080 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:14.080 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:26.272 Initializing NVMe Controllers 00:09:26.272 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:26.272 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:26.272 Initialization complete. Launching workers. 00:09:26.272 ======================================================== 00:09:26.272 Latency(us) 00:09:26.272 Device Information : IOPS MiB/s Average min max 00:09:26.272 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14591.40 57.00 4385.74 865.30 16245.07 00:09:26.272 ======================================================== 00:09:26.272 Total : 14591.40 57.00 4385.74 865.30 16245.07 00:09:26.272 00:09:26.272 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:26.272 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:26.272 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:26.272 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:09:26.272 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:26.272 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:09:26.272 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:26.272 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:26.272 rmmod nvme_tcp 00:09:26.272 rmmod nvme_fabrics 00:09:26.272 rmmod nvme_keyring 00:09:26.272 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:26.272 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:09:26.272 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:09:26.272 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2435613 ']' 00:09:26.272 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2435613 00:09:26.272 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # '[' -z 2435613 ']' 00:09:26.272 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # kill -0 2435613 00:09:26.272 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # uname 00:09:26.272 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:26.272 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2435613 00:09:26.272 07:11:27 
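The RPC sequence recorded above (TCP transport, 64 MiB / 512-byte malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1, namespace, listener on 10.0.0.2:4420) and the perf run can be reproduced by hand with scripts/rpc.py against a running target; the flags below are copied from the trace, and the paths assume the SPDK source tree:

```bash
RPC=./scripts/rpc.py

# Target-side configuration, mirroring the rpc_cmd calls in the trace.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512                       # 64 MiB bdev, 512-byte blocks -> Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator-side load: 10 s of 4 KiB mixed random I/O at queue depth 64 over TCP.
./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
```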
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # process_name=nvmf 00:09:26.272 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@962 -- # '[' nvmf = sudo ']' 00:09:26.272 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2435613' 00:09:26.272 killing process with pid 2435613 00:09:26.272 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # kill 2435613 00:09:26.272 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@976 -- # wait 2435613 00:09:26.272 nvmf threads initialize successfully 00:09:26.272 bdev subsystem init successfully 00:09:26.272 created a nvmf target service 00:09:26.272 create targets's poll groups done 00:09:26.272 all subsystems of target started 00:09:26.272 nvmf target is running 00:09:26.272 all subsystems of target stopped 00:09:26.272 destroy targets's poll groups done 00:09:26.272 destroyed the nvmf target service 00:09:26.272 bdev subsystem finish successfully 00:09:26.272 nvmf threads destroy successfully 00:09:26.272 07:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:26.272 07:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:26.272 07:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:26.272 07:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:09:26.272 07:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:09:26.272 07:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:26.272 07:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:09:26.272 07:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:26.272 07:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:26.272 07:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.272 07:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:26.272 07:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.840 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:26.840 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:26.840 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:26.840 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:26.840 00:09:26.840 real 0m16.429s 00:09:26.840 user 0m46.015s 00:09:26.840 sys 0m3.524s 00:09:26.840 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:26.840 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:26.840 ************************************ 00:09:26.840 END TEST nvmf_example 00:09:26.840 ************************************ 00:09:26.840 07:11:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:26.840 07:11:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:26.840 07:11:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:26.840 07:11:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:26.840 ************************************ 00:09:26.840 START TEST nvmf_filesystem 00:09:26.840 ************************************ 00:09:26.840 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:27.103 * Looking for test storage... 00:09:27.103 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:27.103 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:27.103 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:09:27.103 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:27.103 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:27.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.104 --rc genhtml_branch_coverage=1 00:09:27.104 --rc genhtml_function_coverage=1 00:09:27.104 --rc genhtml_legend=1 00:09:27.104 --rc geninfo_all_blocks=1 00:09:27.104 --rc geninfo_unexecuted_blocks=1 00:09:27.104 00:09:27.104 ' 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:27.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.104 --rc genhtml_branch_coverage=1 00:09:27.104 --rc genhtml_function_coverage=1 00:09:27.104 --rc genhtml_legend=1 00:09:27.104 --rc geninfo_all_blocks=1 00:09:27.104 --rc geninfo_unexecuted_blocks=1 00:09:27.104 00:09:27.104 ' 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:27.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.104 --rc genhtml_branch_coverage=1 00:09:27.104 --rc genhtml_function_coverage=1 00:09:27.104 --rc genhtml_legend=1 00:09:27.104 --rc geninfo_all_blocks=1 00:09:27.104 --rc geninfo_unexecuted_blocks=1 00:09:27.104 00:09:27.104 ' 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:27.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.104 --rc genhtml_branch_coverage=1 00:09:27.104 --rc genhtml_function_coverage=1 00:09:27.104 --rc genhtml_legend=1 00:09:27.104 --rc geninfo_all_blocks=1 00:09:27.104 --rc geninfo_unexecuted_blocks=1 00:09:27.104 00:09:27.104 ' 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:27.104 07:11:30 
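The `lt`/`cmp_versions` trace above compares the installed lcov version against 1.15 and 2 one dotted component at a time. A simplified sketch of that comparison; it is not a drop-in for scripts/common.sh, which also splits on `:` and handles more operators:

```bash
# Return success (0) when version $1 sorts strictly before version $2.
version_lt() {
    local IFS=.-                      # split on dots and dashes
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                          # versions are equal
}

version_lt 1.15 2 && echo "1.15 < 2"  # the same check performed in the trace
```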
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:27.104 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:27.105 
07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:09:27.105 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:09:27.106 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:09:27.106 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:09:27.106 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:09:27.106 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:09:27.106 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:09:27.106 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:27.106 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:09:27.106 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:09:27.106 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:09:27.106 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:27.106 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:27.106 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:27.106 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:27.106 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:27.106 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:27.106 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:27.106 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:27.106 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:27.106 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:27.106 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:27.106 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:27.106 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:27.106 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:27.106 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:27.106 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:27.106 #define SPDK_CONFIG_H 00:09:27.106 #define SPDK_CONFIG_AIO_FSDEV 1 00:09:27.106 #define SPDK_CONFIG_APPS 1 00:09:27.106 #define SPDK_CONFIG_ARCH native 00:09:27.106 #undef SPDK_CONFIG_ASAN 00:09:27.106 #undef SPDK_CONFIG_AVAHI 00:09:27.106 #undef SPDK_CONFIG_CET 00:09:27.106 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:09:27.106 #define SPDK_CONFIG_COVERAGE 1 00:09:27.106 #define SPDK_CONFIG_CROSS_PREFIX 00:09:27.106 #undef SPDK_CONFIG_CRYPTO 00:09:27.106 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:27.106 #undef SPDK_CONFIG_CUSTOMOCF 00:09:27.106 #undef SPDK_CONFIG_DAOS 00:09:27.106 #define SPDK_CONFIG_DAOS_DIR 00:09:27.106 #define SPDK_CONFIG_DEBUG 1 00:09:27.106 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:27.106 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:27.106 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:27.106 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:27.106 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:27.106 #undef SPDK_CONFIG_DPDK_UADK 00:09:27.106 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:27.106 #define SPDK_CONFIG_EXAMPLES 1 00:09:27.106 #undef SPDK_CONFIG_FC 00:09:27.106 #define SPDK_CONFIG_FC_PATH 00:09:27.106 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:27.106 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:27.106 #define SPDK_CONFIG_FSDEV 1 00:09:27.106 #undef SPDK_CONFIG_FUSE 00:09:27.106 #undef SPDK_CONFIG_FUZZER 00:09:27.106 #define SPDK_CONFIG_FUZZER_LIB 00:09:27.106 #undef SPDK_CONFIG_GOLANG 00:09:27.106 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:27.106 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:27.106 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:27.106 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:27.106 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:27.106 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:27.106 #undef SPDK_CONFIG_HAVE_LZ4 00:09:27.106 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:09:27.106 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:09:27.106 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:27.106 #define SPDK_CONFIG_IDXD 1 00:09:27.106 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:27.106 #undef SPDK_CONFIG_IPSEC_MB 00:09:27.106 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:27.106 #define SPDK_CONFIG_ISAL 1 00:09:27.106 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:27.106 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:27.106 #define SPDK_CONFIG_LIBDIR 00:09:27.106 #undef SPDK_CONFIG_LTO 00:09:27.106 #define SPDK_CONFIG_MAX_LCORES 128 00:09:27.106 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:09:27.106 #define SPDK_CONFIG_NVME_CUSE 1 00:09:27.106 #undef SPDK_CONFIG_OCF 00:09:27.106 #define SPDK_CONFIG_OCF_PATH 00:09:27.106 #define SPDK_CONFIG_OPENSSL_PATH 00:09:27.106 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:27.106 #define SPDK_CONFIG_PGO_DIR 00:09:27.106 #undef SPDK_CONFIG_PGO_USE 00:09:27.106 #define SPDK_CONFIG_PREFIX /usr/local 00:09:27.106 #undef SPDK_CONFIG_RAID5F 00:09:27.106 #undef SPDK_CONFIG_RBD 00:09:27.106 #define SPDK_CONFIG_RDMA 1 00:09:27.106 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:27.106 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:27.106 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:27.106 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:27.106 #define SPDK_CONFIG_SHARED 1 00:09:27.106 #undef SPDK_CONFIG_SMA 00:09:27.106 #define SPDK_CONFIG_TESTS 1 00:09:27.106 #undef SPDK_CONFIG_TSAN 
00:09:27.106 #define SPDK_CONFIG_UBLK 1 00:09:27.106 #define SPDK_CONFIG_UBSAN 1 00:09:27.106 #undef SPDK_CONFIG_UNIT_TESTS 00:09:27.106 #undef SPDK_CONFIG_URING 00:09:27.107 #define SPDK_CONFIG_URING_PATH 00:09:27.107 #undef SPDK_CONFIG_URING_ZNS 00:09:27.107 #undef SPDK_CONFIG_USDT 00:09:27.107 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:27.107 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:27.107 #define SPDK_CONFIG_VFIO_USER 1 00:09:27.107 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:27.107 #define SPDK_CONFIG_VHOST 1 00:09:27.107 #define SPDK_CONFIG_VIRTIO 1 00:09:27.107 #undef SPDK_CONFIG_VTUNE 00:09:27.107 #define SPDK_CONFIG_VTUNE_DIR 00:09:27.107 #define SPDK_CONFIG_WERROR 1 00:09:27.107 #define SPDK_CONFIG_WPDK_DIR 00:09:27.107 #undef SPDK_CONFIG_XNVME 00:09:27.107 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:27.107 07:11:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:27.107 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:27.108 07:11:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:27.108 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:27.109 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j48 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 2437323 ]] 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 2437323 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 
00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.E5Q1Gq 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.E5Q1Gq/tests/target /tmp/spdk.E5Q1Gq 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:09:27.110 07:11:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:09:27.110 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=50856038400 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=61988519936 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11132481536 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30982893568 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994259968 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11366400 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12375265280 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12397707264 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=22441984 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=29919756288 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994259968 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=1074503680 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:09:27.111 07:11:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=6198837248 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=6198849536 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:09:27.111 * Looking for test storage... 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=50856038400 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=13347074048 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:27.111 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:09:27.111 07:11:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:09:27.111 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:27.371 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:27.371 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:27.371 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:27.371 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:27.371 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:27.371 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:27.371 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:27.371 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:27.371 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:27.371 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:27.371 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:27.371 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:27.371 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:27.371 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:27.371 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:09:27.371 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:27.371 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:27.371 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:27.371 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:27.371 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:27.371 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:27.371 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:27.371 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:27.371 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:27.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.372 --rc genhtml_branch_coverage=1 00:09:27.372 --rc genhtml_function_coverage=1 00:09:27.372 --rc genhtml_legend=1 00:09:27.372 --rc geninfo_all_blocks=1 00:09:27.372 --rc geninfo_unexecuted_blocks=1 00:09:27.372 00:09:27.372 ' 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:27.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.372 --rc genhtml_branch_coverage=1 00:09:27.372 --rc genhtml_function_coverage=1 00:09:27.372 --rc genhtml_legend=1 00:09:27.372 --rc geninfo_all_blocks=1 00:09:27.372 --rc geninfo_unexecuted_blocks=1 00:09:27.372 00:09:27.372 ' 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:27.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.372 --rc genhtml_branch_coverage=1 00:09:27.372 --rc genhtml_function_coverage=1 00:09:27.372 --rc genhtml_legend=1 00:09:27.372 --rc geninfo_all_blocks=1 00:09:27.372 --rc geninfo_unexecuted_blocks=1 00:09:27.372 00:09:27.372 ' 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:27.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.372 --rc genhtml_branch_coverage=1 00:09:27.372 --rc genhtml_function_coverage=1 00:09:27.372 --rc genhtml_legend=1 00:09:27.372 --rc geninfo_all_blocks=1 00:09:27.372 --rc geninfo_unexecuted_blocks=1 00:09:27.372 00:09:27.372 ' 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:27.372 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:27.372 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:09:29.907 
07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:29.907 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:29.907 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:29.907 Found net devices under 0000:09:00.0: cvl_0_0 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:29.907 Found net devices under 
0000:09:00.1: cvl_0_1 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:29.907 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:29.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:29.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:09:29.908 00:09:29.908 --- 10.0.0.2 ping statistics --- 00:09:29.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.908 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:09:29.908 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:29.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:29.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:09:29.908 00:09:29.908 --- 10.0.0.1 ping statistics --- 00:09:29.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.908 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:09:29.908 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:29.908 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:09:29.908 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:29.908 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:29.908 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:29.908 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:29.908 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:29.908 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:29.908 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:29.908 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:29.908 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:29.908 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:29.908 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:29.908 ************************************ 00:09:29.908 START TEST nvmf_filesystem_no_in_capsule 00:09:29.908 ************************************ 00:09:29.908 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 0 00:09:29.908 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:29.908 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:29.908 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:29.908 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:29.908 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
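The block above is nvmf_tcp_init wiring up the loop-back test bed: the two E810 ports this run detected (cvl_0_0 and cvl_0_1) become the target and initiator ends of the link, one port is moved into a private network namespace to carry the target side while the other stays in the root namespace as the initiator, an iptables rule opens TCP port 4420, and a ping in each direction verifies the path before the kernel nvme-tcp driver is loaded. Condensed into a sketch (interface names are the ones this run detected; other machines will differ):

    ip netns add cvl_0_0_ns_spdk                                  # namespace that will host the SPDK target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move one port into the target namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                            # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target namespace -> root namespace
    modprobe nvme-tcp                                             # host-side NVMe/TCP initiator driver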
00:09:29.908 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2438967 00:09:29.908 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:29.908 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2438967 00:09:29.908 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 2438967 ']' 00:09:29.908 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.908 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:29.908 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.908 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:29.908 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:29.908 [2024-11-20 07:11:33.051642] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:09:29.908 [2024-11-20 07:11:33.051737] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.908 [2024-11-20 07:11:33.126958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:29.908 [2024-11-20 07:11:33.190127] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:29.908 [2024-11-20 07:11:33.190176] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:29.908 [2024-11-20 07:11:33.190189] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:29.908 [2024-11-20 07:11:33.190200] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:29.908 [2024-11-20 07:11:33.190209] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
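nvmfappstart then launches the SPDK target inside that namespace with tracepoints enabled and a four-core mask, records its pid, and blocks in waitforlisten until the RPC socket answers. Roughly the same thing by hand (the polling loop below is a simplified stand-in for waitforlisten):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the RPC UNIX socket until the target is ready to accept rpc.py calls
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done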
00:09:29.908 [2024-11-20 07:11:33.191830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.908 [2024-11-20 07:11:33.191905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:29.908 [2024-11-20 07:11:33.191956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:29.908 [2024-11-20 07:11:33.191965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.908 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:29.908 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:09:29.908 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:29.908 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:29.908 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:30.167 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:30.167 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:30.167 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:30.167 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.167 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:30.167 [2024-11-20 07:11:33.343888] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:30.167 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.167 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:30.167 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.167 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:30.167 Malloc1 00:09:30.167 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.167 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:30.167 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.167 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:30.167 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.167 07:11:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:30.167 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.167 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:30.167 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.167 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.167 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.167 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:30.167 [2024-11-20 07:11:33.556087] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.167 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.167 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:30.167 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:09:30.167 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:09:30.167 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:09:30.167 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:09:30.167 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:30.167 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.167 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:30.167 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.167 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:09:30.167 { 00:09:30.167 "name": "Malloc1", 00:09:30.167 "aliases": [ 00:09:30.167 "d278b9aa-6cb6-436f-b397-8d5dae6f8cbf" 00:09:30.167 ], 00:09:30.167 "product_name": "Malloc disk", 00:09:30.167 "block_size": 512, 00:09:30.167 "num_blocks": 1048576, 00:09:30.167 "uuid": "d278b9aa-6cb6-436f-b397-8d5dae6f8cbf", 00:09:30.167 "assigned_rate_limits": { 00:09:30.167 "rw_ios_per_sec": 0, 00:09:30.167 "rw_mbytes_per_sec": 0, 00:09:30.167 "r_mbytes_per_sec": 0, 00:09:30.167 "w_mbytes_per_sec": 0 00:09:30.167 }, 00:09:30.167 "claimed": true, 00:09:30.167 "claim_type": "exclusive_write", 00:09:30.167 "zoned": false, 00:09:30.167 "supported_io_types": { 00:09:30.167 "read": 
true, 00:09:30.167 "write": true, 00:09:30.167 "unmap": true, 00:09:30.167 "flush": true, 00:09:30.167 "reset": true, 00:09:30.167 "nvme_admin": false, 00:09:30.167 "nvme_io": false, 00:09:30.167 "nvme_io_md": false, 00:09:30.167 "write_zeroes": true, 00:09:30.167 "zcopy": true, 00:09:30.167 "get_zone_info": false, 00:09:30.167 "zone_management": false, 00:09:30.167 "zone_append": false, 00:09:30.167 "compare": false, 00:09:30.167 "compare_and_write": false, 00:09:30.168 "abort": true, 00:09:30.168 "seek_hole": false, 00:09:30.168 "seek_data": false, 00:09:30.168 "copy": true, 00:09:30.168 "nvme_iov_md": false 00:09:30.168 }, 00:09:30.168 "memory_domains": [ 00:09:30.168 { 00:09:30.168 "dma_device_id": "system", 00:09:30.168 "dma_device_type": 1 00:09:30.168 }, 00:09:30.168 { 00:09:30.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.168 "dma_device_type": 2 00:09:30.168 } 00:09:30.168 ], 00:09:30.168 "driver_specific": {} 00:09:30.168 } 00:09:30.168 ]' 00:09:30.168 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:09:30.425 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:09:30.425 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:09:30.425 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:09:30.425 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:09:30.425 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:09:30.425 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:30.425 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:30.990 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:30.990 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:09:30.990 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:30.990 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:09:30.990 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:09:33.518 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:33.518 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:33.518 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # grep -c 
SPDKISFASTANDAWESOME 00:09:33.518 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:09:33.518 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:33.518 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:09:33.518 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:33.518 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:33.518 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:33.518 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:33.518 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:33.518 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:33.518 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:33.518 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:33.518 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:33.519 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:33.519 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:33.519 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:33.519 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:34.459 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:34.459 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:34.459 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:34.459 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:34.459 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:34.459 ************************************ 00:09:34.459 START TEST filesystem_ext4 00:09:34.459 ************************************ 00:09:34.459 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 
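Between filesystem.sh@52 and @69 the trace is the standard provisioning sequence for these filesystem tests: create the TCP transport with in-capsule data disabled (-c 0), expose a 512 MiB / 512 B-block malloc bdev as a namespace of nqn.2016-06.io.spdk:cnode1, open a listener on the target-namespace IP, connect from the host with nvme-cli, wait for the serial to appear in lsblk, and lay down a single GPT partition that the ext4/btrfs/xfs sub-tests reuse. With rpc_cmd expanded to scripts/rpc.py it amounts to the following (hostnqn/hostid options omitted here; the script derives them from the machine UUID):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0        # -c 0: no in-capsule data on this pass
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1               # 512 MiB bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # host side (nvme-cli)
    mkdir -p /mnt/device
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%        # one partition spanning the namespace
    partprobe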
00:09:34.459 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:34.459 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:34.459 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:34.459 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:09:34.459 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:09:34.459 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:09:34.459 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local force 00:09:34.459 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:09:34.459 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:09:34.459 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:34.459 mke2fs 1.47.0 (5-Feb-2023) 00:09:34.796 Discarding device blocks: 0/522240 done 00:09:34.796 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:34.796 Filesystem UUID: 7a0866af-80e6-4bcb-b83a-c989b6a5ff15 00:09:34.796 Superblock backups stored on blocks: 00:09:34.796 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:34.796 00:09:34.796 Allocating group tables: 0/64 done 00:09:34.796 Writing inode tables: 0/64 done 00:09:34.796 Creating journal (8192 blocks): done 00:09:34.796 Writing superblocks and filesystem accounting information: 0/64 done 00:09:34.796 00:09:34.796 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@947 -- # return 0 00:09:34.796 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:40.079 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:40.079 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:09:40.079 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:40.079 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:09:40.079 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:40.079 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:40.079 
07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2438967 00:09:40.079 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:40.079 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:40.079 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:40.079 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:40.079 00:09:40.079 real 0m5.610s 00:09:40.079 user 0m0.017s 00:09:40.079 sys 0m0.065s 00:09:40.079 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:40.079 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:40.079 ************************************ 00:09:40.079 END TEST filesystem_ext4 00:09:40.079 ************************************ 00:09:40.079 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:40.079 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:40.079 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:40.079 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:40.337 ************************************ 00:09:40.337 START TEST filesystem_btrfs 00:09:40.337 ************************************ 00:09:40.337 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:40.337 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:40.337 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:40.337 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:40.337 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:09:40.337 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:09:40.337 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:09:40.337 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local force 00:09:40.337 07:11:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:09:40.337 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:09:40.337 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:40.337 btrfs-progs v6.8.1 00:09:40.337 See https://btrfs.readthedocs.io for more information. 00:09:40.337 00:09:40.337 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:09:40.337 NOTE: several default settings have changed in version 5.15, please make sure 00:09:40.337 this does not affect your deployments: 00:09:40.337 - DUP for metadata (-m dup) 00:09:40.337 - enabled no-holes (-O no-holes) 00:09:40.337 - enabled free-space-tree (-R free-space-tree) 00:09:40.337 00:09:40.337 Label: (null) 00:09:40.337 UUID: 38f6f671-1831-4bbf-9b78-98039ee37131 00:09:40.337 Node size: 16384 00:09:40.337 Sector size: 4096 (CPU page size: 4096) 00:09:40.337 Filesystem size: 510.00MiB 00:09:40.337 Block group profiles: 00:09:40.337 Data: single 8.00MiB 00:09:40.337 Metadata: DUP 32.00MiB 00:09:40.337 System: DUP 8.00MiB 00:09:40.337 SSD detected: yes 00:09:40.337 Zoned device: no 00:09:40.337 Features: extref, skinny-metadata, no-holes, free-space-tree 00:09:40.337 Checksum: crc32c 00:09:40.337 Number of devices: 1 00:09:40.337 Devices: 00:09:40.337 ID SIZE PATH 00:09:40.337 1 510.00MiB /dev/nvme0n1p1 00:09:40.337 00:09:40.337 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@947 -- # return 0 00:09:40.337 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:40.594 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:40.594 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:09:40.594 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:40.594 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:09:40.594 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:40.594 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:40.853 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2438967 00:09:40.853 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:40.853 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:40.853 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:40.853 
07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:40.853 00:09:40.853 real 0m0.544s 00:09:40.853 user 0m0.025s 00:09:40.853 sys 0m0.094s 00:09:40.853 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:40.853 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:40.853 ************************************ 00:09:40.853 END TEST filesystem_btrfs 00:09:40.853 ************************************ 00:09:40.853 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:09:40.853 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:40.853 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:40.853 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:40.853 ************************************ 00:09:40.853 START TEST filesystem_xfs 00:09:40.853 ************************************ 00:09:40.853 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:09:40.853 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:40.853 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:40.853 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:40.853 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:09:40.853 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:09:40.853 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local i=0 00:09:40.853 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local force 00:09:40.853 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:09:40.853 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # force=-f 00:09:40.853 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:40.853 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:40.853 = sectsz=512 attr=2, projid32bit=1 00:09:40.853 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:40.853 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:40.853 data 
= bsize=4096 blocks=130560, imaxpct=25 00:09:40.853 = sunit=0 swidth=0 blks 00:09:40.853 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:40.853 log =internal log bsize=4096 blocks=16384, version=2 00:09:40.853 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:40.853 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:41.787 Discarding blocks...Done. 00:09:41.787 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@947 -- # return 0 00:09:41.787 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:44.312 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:44.313 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:09:44.313 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:44.313 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:09:44.313 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:09:44.313 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:44.313 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2438967 00:09:44.313 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:44.313 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:44.313 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:44.313 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:44.313 00:09:44.313 real 0m3.329s 00:09:44.313 user 0m0.024s 00:09:44.313 sys 0m0.058s 00:09:44.313 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:44.313 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:44.313 ************************************ 00:09:44.313 END TEST filesystem_xfs 00:09:44.313 ************************************ 00:09:44.313 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:44.313 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:44.313 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:44.313 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.313 07:11:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:44.313 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:09:44.313 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:09:44.313 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:44.313 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:09:44.313 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:44.313 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:09:44.313 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:44.313 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.313 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:44.313 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.313 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:44.313 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2438967 00:09:44.313 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 2438967 ']' 00:09:44.313 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # kill -0 2438967 00:09:44.313 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # uname 00:09:44.313 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:44.313 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2438967 00:09:44.313 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:44.313 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:44.313 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2438967' 00:09:44.313 killing process with pid 2438967 00:09:44.313 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # kill 2438967 00:09:44.313 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@976 -- # wait 2438967 00:09:44.878 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:44.878 00:09:44.878 real 0m15.099s 00:09:44.878 user 0m58.322s 00:09:44.878 sys 0m1.966s 00:09:44.878 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:44.878 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:44.878 ************************************ 00:09:44.878 END TEST nvmf_filesystem_no_in_capsule 00:09:44.878 ************************************ 00:09:44.878 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:09:44.878 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:44.878 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:44.878 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:44.878 ************************************ 00:09:44.878 START TEST nvmf_filesystem_in_capsule 00:09:44.878 ************************************ 00:09:44.878 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 4096 00:09:44.878 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:09:44.878 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:44.878 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:44.878 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:44.878 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:44.878 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2441049 00:09:44.878 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:44.878 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2441049 00:09:44.878 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 2441049 ']' 00:09:44.878 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.878 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:44.878 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
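The first pass finishes here after roughly 15 s of wall time, and run_test immediately starts the second pass, nvmf_filesystem_in_capsule, which repeats the identical ext4/btrfs/xfs workloads; the only functional difference is that the transport is created with a 4096-byte in-capsule data size, so small writes ride inside the NVMe/TCP command capsule instead of a separate data transfer. The distinguishing call, visible a few entries below, is:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096     # -c: in-capsule data size in bytes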
00:09:44.878 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:44.878 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:44.878 [2024-11-20 07:11:48.200687] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:09:44.878 [2024-11-20 07:11:48.200762] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:44.878 [2024-11-20 07:11:48.271776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:45.136 [2024-11-20 07:11:48.331217] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:45.136 [2024-11-20 07:11:48.331262] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:45.136 [2024-11-20 07:11:48.331301] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:45.136 [2024-11-20 07:11:48.331323] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:45.136 [2024-11-20 07:11:48.331347] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:45.136 [2024-11-20 07:11:48.332929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.136 [2024-11-20 07:11:48.333005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:45.136 [2024-11-20 07:11:48.333058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:45.136 [2024-11-20 07:11:48.333064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.136 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:45.136 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:09:45.136 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:45.136 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:45.136 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:45.136 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:45.136 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:45.136 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:09:45.136 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.136 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:45.136 [2024-11-20 07:11:48.484266] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:45.136 07:11:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.136 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:45.136 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.136 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:45.394 Malloc1 00:09:45.394 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.394 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:45.394 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.394 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:45.394 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.394 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:45.394 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.394 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:45.394 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.394 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:45.394 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.394 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:45.394 [2024-11-20 07:11:48.691229] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:45.394 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.394 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:45.394 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:09:45.394 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:09:45.394 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:09:45.394 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:09:45.394 07:11:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:45.394 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.394 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:45.394 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.394 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:09:45.394 { 00:09:45.394 "name": "Malloc1", 00:09:45.394 "aliases": [ 00:09:45.394 "a55b99da-ebf6-423b-b4b1-2e40f675480d" 00:09:45.394 ], 00:09:45.394 "product_name": "Malloc disk", 00:09:45.394 "block_size": 512, 00:09:45.394 "num_blocks": 1048576, 00:09:45.394 "uuid": "a55b99da-ebf6-423b-b4b1-2e40f675480d", 00:09:45.394 "assigned_rate_limits": { 00:09:45.394 "rw_ios_per_sec": 0, 00:09:45.394 "rw_mbytes_per_sec": 0, 00:09:45.394 "r_mbytes_per_sec": 0, 00:09:45.394 "w_mbytes_per_sec": 0 00:09:45.394 }, 00:09:45.394 "claimed": true, 00:09:45.394 "claim_type": "exclusive_write", 00:09:45.394 "zoned": false, 00:09:45.394 "supported_io_types": { 00:09:45.394 "read": true, 00:09:45.394 "write": true, 00:09:45.394 "unmap": true, 00:09:45.394 "flush": true, 00:09:45.394 "reset": true, 00:09:45.394 "nvme_admin": false, 00:09:45.394 "nvme_io": false, 00:09:45.394 "nvme_io_md": false, 00:09:45.394 "write_zeroes": true, 00:09:45.394 "zcopy": true, 00:09:45.394 "get_zone_info": false, 00:09:45.394 "zone_management": false, 00:09:45.394 "zone_append": false, 00:09:45.394 "compare": false, 00:09:45.394 "compare_and_write": false, 00:09:45.394 "abort": true, 00:09:45.394 "seek_hole": false, 00:09:45.394 "seek_data": false, 00:09:45.394 "copy": true, 00:09:45.394 "nvme_iov_md": false 00:09:45.394 }, 00:09:45.394 "memory_domains": [ 00:09:45.394 { 00:09:45.394 "dma_device_id": "system", 00:09:45.394 "dma_device_type": 1 00:09:45.394 }, 00:09:45.394 { 00:09:45.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.394 "dma_device_type": 2 00:09:45.394 } 00:09:45.394 ], 00:09:45.394 "driver_specific": {} 00:09:45.394 } 00:09:45.394 ]' 00:09:45.394 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:09:45.394 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:09:45.394 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:09:45.394 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:09:45.394 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:09:45.394 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:09:45.394 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:45.394 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:46.326 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:46.326 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:09:46.326 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:46.326 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:09:46.326 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:09:48.226 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:48.226 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:48.226 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:48.226 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:09:48.226 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:48.226 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:09:48.226 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:48.226 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:48.226 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:48.226 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:48.226 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:48.226 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:48.226 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:48.226 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:48.226 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:48.226 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:48.226 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:48.484 07:11:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:49.415 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:50.348 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:09:50.348 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:50.348 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:50.348 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:50.348 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:50.348 ************************************ 00:09:50.348 START TEST filesystem_in_capsule_ext4 00:09:50.348 ************************************ 00:09:50.348 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:50.348 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:50.348 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:50.348 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:50.348 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:09:50.348 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:09:50.348 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:09:50.348 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local force 00:09:50.348 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:09:50.348 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:09:50.348 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:50.348 mke2fs 1.47.0 (5-Feb-2023) 00:09:50.348 Discarding device blocks: 0/522240 done 00:09:50.348 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:50.348 Filesystem UUID: 0236cb2f-3215-4d5b-8d6a-5b834be633b5 00:09:50.348 Superblock backups stored on blocks: 00:09:50.348 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:50.348 00:09:50.348 Allocating group tables: 0/64 done 00:09:50.348 Writing inode tables: 
0/64 done 00:09:50.348 Creating journal (8192 blocks): done 00:09:51.477 Writing superblocks and filesystem accounting information: 0/64 done 00:09:51.477 00:09:51.477 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@947 -- # return 0 00:09:51.477 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:56.736 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:56.736 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:09:56.736 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:56.736 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:09:56.736 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:56.736 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:56.736 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2441049 00:09:56.736 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:56.736 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:56.736 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:56.736 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:56.736 00:09:56.736 real 0m6.547s 00:09:56.736 user 0m0.013s 00:09:56.736 sys 0m0.071s 00:09:56.736 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:56.736 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:56.736 ************************************ 00:09:56.736 END TEST filesystem_in_capsule_ext4 00:09:56.736 ************************************ 00:09:56.736 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:56.736 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:56.736 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:56.736 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:56.736 
************************************ 00:09:56.736 START TEST filesystem_in_capsule_btrfs 00:09:56.736 ************************************ 00:09:56.736 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:56.736 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:56.736 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:56.736 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:56.736 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:09:56.736 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:09:56.736 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:09:56.737 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local force 00:09:56.737 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:09:56.737 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:09:56.737 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:56.994 btrfs-progs v6.8.1 00:09:56.994 See https://btrfs.readthedocs.io for more information. 00:09:56.994 00:09:56.994 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:09:56.994 NOTE: several default settings have changed in version 5.15, please make sure 00:09:56.994 this does not affect your deployments: 00:09:56.994 - DUP for metadata (-m dup) 00:09:56.994 - enabled no-holes (-O no-holes) 00:09:56.994 - enabled free-space-tree (-R free-space-tree) 00:09:56.994 00:09:56.994 Label: (null) 00:09:56.994 UUID: 42bca18d-3ccb-46a1-91a1-4df94a1f399c 00:09:56.994 Node size: 16384 00:09:56.994 Sector size: 4096 (CPU page size: 4096) 00:09:56.994 Filesystem size: 510.00MiB 00:09:56.994 Block group profiles: 00:09:56.994 Data: single 8.00MiB 00:09:56.994 Metadata: DUP 32.00MiB 00:09:56.994 System: DUP 8.00MiB 00:09:56.994 SSD detected: yes 00:09:56.994 Zoned device: no 00:09:56.994 Features: extref, skinny-metadata, no-holes, free-space-tree 00:09:56.994 Checksum: crc32c 00:09:56.994 Number of devices: 1 00:09:56.994 Devices: 00:09:56.994 ID SIZE PATH 00:09:56.994 1 510.00MiB /dev/nvme0n1p1 00:09:56.994 00:09:56.994 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@947 -- # return 0 00:09:56.994 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:57.925 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:57.925 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:09:57.925 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:57.925 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:09:57.925 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:57.925 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:57.925 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2441049 00:09:57.925 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:57.925 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:57.925 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:57.925 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:57.925 00:09:57.925 real 0m0.960s 00:09:57.925 user 0m0.010s 00:09:57.925 sys 0m0.105s 00:09:57.925 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:57.925 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:09:57.925 ************************************ 00:09:57.925 END TEST filesystem_in_capsule_btrfs 00:09:57.925 ************************************ 00:09:57.925 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:09:57.925 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:57.925 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:57.925 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:57.925 ************************************ 00:09:57.925 START TEST filesystem_in_capsule_xfs 00:09:57.925 ************************************ 00:09:57.925 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:09:57.925 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:57.925 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:57.925 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:57.925 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:09:57.925 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:09:57.925 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local i=0 00:09:57.925 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local force 00:09:57.925 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:09:57.925 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # force=-f 00:09:57.925 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:57.925 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:57.925 = sectsz=512 attr=2, projid32bit=1 00:09:57.925 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:57.925 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:57.925 data = bsize=4096 blocks=130560, imaxpct=25 00:09:57.925 = sunit=0 swidth=0 blks 00:09:57.925 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:57.925 log =internal log bsize=4096 blocks=16384, version=2 00:09:57.925 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:57.925 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:59.295 Discarding blocks...Done. 
00:09:59.295 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@947 -- # return 0 00:09:59.295 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:01.189 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:01.189 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:01.189 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:01.189 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:01.446 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:01.446 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:01.446 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2441049 00:10:01.446 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:01.446 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:01.446 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:01.446 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:01.446 00:10:01.446 real 0m3.525s 00:10:01.446 user 0m0.014s 00:10:01.446 sys 0m0.064s 00:10:01.446 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:01.446 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:01.446 ************************************ 00:10:01.446 END TEST filesystem_in_capsule_xfs 00:10:01.446 ************************************ 00:10:01.446 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:01.720 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:01.720 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:01.720 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.721 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:01.721 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1221 -- # local i=0 00:10:01.721 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:01.721 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:01.721 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:01.721 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:01.721 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:10:01.721 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:01.721 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.721 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:01.721 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.721 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:01.721 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2441049 00:10:01.721 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 2441049 ']' 00:10:01.721 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # kill -0 2441049 00:10:01.721 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # uname 00:10:01.721 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:01.721 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2441049 00:10:01.978 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:01.978 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:01.978 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2441049' 00:10:01.978 killing process with pid 2441049 00:10:01.978 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # kill 2441049 00:10:01.978 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@976 -- # wait 2441049 00:10:02.236 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:02.236 00:10:02.236 real 0m17.476s 00:10:02.236 user 1m7.548s 00:10:02.236 sys 0m2.258s 00:10:02.236 07:12:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:02.236 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:02.236 ************************************ 00:10:02.236 END TEST nvmf_filesystem_in_capsule 00:10:02.236 ************************************ 00:10:02.236 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:02.236 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:02.236 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:02.236 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:02.236 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:02.236 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:02.236 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:02.236 rmmod nvme_tcp 00:10:02.495 rmmod nvme_fabrics 00:10:02.495 rmmod nvme_keyring 00:10:02.495 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:02.495 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:02.495 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:02.495 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:02.495 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:02.495 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:02.495 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:02.495 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:02.495 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:10:02.495 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:02.495 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:02.495 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:02.495 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:02.495 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:02.495 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:02.495 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.400 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:04.400 00:10:04.400 real 0m37.535s 00:10:04.400 user 2m7.015s 00:10:04.400 sys 0m6.062s 00:10:04.400 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:04.400 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:04.400 
************************************ 00:10:04.400 END TEST nvmf_filesystem 00:10:04.400 ************************************ 00:10:04.400 07:12:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:04.400 07:12:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:04.400 07:12:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:04.400 07:12:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:04.400 ************************************ 00:10:04.400 START TEST nvmf_target_discovery 00:10:04.400 ************************************ 00:10:04.400 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:04.659 * Looking for test storage... 00:10:04.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:04.659 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:04.659 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:10:04.659 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:04.659 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:04.659 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.659 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.659 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.659 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.659 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.659 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.659 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.659 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.659 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.659 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.659 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.659 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:04.659 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:04.659 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.659 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:04.659 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:04.659 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:04.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.660 --rc genhtml_branch_coverage=1 00:10:04.660 --rc genhtml_function_coverage=1 00:10:04.660 --rc genhtml_legend=1 00:10:04.660 --rc geninfo_all_blocks=1 00:10:04.660 --rc geninfo_unexecuted_blocks=1 00:10:04.660 00:10:04.660 ' 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:04.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.660 --rc genhtml_branch_coverage=1 00:10:04.660 --rc genhtml_function_coverage=1 00:10:04.660 --rc genhtml_legend=1 00:10:04.660 --rc geninfo_all_blocks=1 00:10:04.660 --rc geninfo_unexecuted_blocks=1 00:10:04.660 00:10:04.660 ' 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:04.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.660 --rc genhtml_branch_coverage=1 00:10:04.660 --rc genhtml_function_coverage=1 00:10:04.660 --rc genhtml_legend=1 00:10:04.660 --rc geninfo_all_blocks=1 00:10:04.660 --rc geninfo_unexecuted_blocks=1 00:10:04.660 00:10:04.660 ' 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:04.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.660 --rc genhtml_branch_coverage=1 00:10:04.660 --rc genhtml_function_coverage=1 00:10:04.660 --rc genhtml_legend=1 00:10:04.660 --rc geninfo_all_blocks=1 00:10:04.660 --rc geninfo_unexecuted_blocks=1 00:10:04.660 00:10:04.660 ' 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:04.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:04.660 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:04.661 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:04.661 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:04.661 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:04.661 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.661 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.661 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.661 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:04.661 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:04.661 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:10:04.661 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:10:07.191 07:12:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:07.191 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:07.191 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:07.191 Found net devices under 0000:09:00.0: cvl_0_0 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:07.191 Found net devices under 0000:09:00.1: cvl_0_1 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:07.191 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:07.192 07:12:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:07.192 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:07.192 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:10:07.192 00:10:07.192 --- 10.0.0.2 ping statistics --- 00:10:07.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.192 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:07.192 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:07.192 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:10:07.192 00:10:07.192 --- 10.0.0.1 ping statistics --- 00:10:07.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.192 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2445828 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2445828 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # '[' -z 2445828 ']' 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:07.192 [2024-11-20 07:12:10.257450] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:10:07.192 [2024-11-20 07:12:10.257537] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:07.192 [2024-11-20 07:12:10.335841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:07.192 [2024-11-20 07:12:10.397452] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:07.192 [2024-11-20 07:12:10.397497] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:07.192 [2024-11-20 07:12:10.397526] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:07.192 [2024-11-20 07:12:10.397538] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:07.192 [2024-11-20 07:12:10.397549] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
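The nvmfappstart step above launches the target inside the cvl_0_0_ns_spdk namespace and then waits for it to open the RPC socket before any rpc_cmd call is issued. A hedged sketch of that pattern, where the polling loop is an illustration rather than the exact waitforlisten helper (the -i 0 -e 0xFFFF -m 0xF flags and /var/tmp/spdk.sock come from the log itself):

start_nvmf_tgt_in_ns() {
    local ns=$1 tgt_bin=$2 rpc_sock=${3:-/var/tmp/spdk.sock}
    ip netns exec "$ns" "$tgt_bin" -i 0 -e 0xFFFF -m 0xF &   # run the target inside the namespace
    local pid=$!
    local i
    for ((i = 0; i < 100; i++)); do                           # poll until the RPC socket exists
        [[ -S $rpc_sock ]] && { echo "$pid"; return 0; }
        kill -0 "$pid" 2>/dev/null || return 1                # give up if the target already died
        sleep 0.1
    done
    return 1
}
# nvmfpid=$(start_nvmf_tgt_in_ns cvl_0_0_ns_spdk ./build/bin/nvmf_tgt)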
00:10:07.192 [2024-11-20 07:12:10.399132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.192 [2024-11-20 07:12:10.399157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:07.192 [2024-11-20 07:12:10.399214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:07.192 [2024-11-20 07:12:10.399217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@866 -- # return 0 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:07.192 [2024-11-20 07:12:10.560077] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:07.192 Null1 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:07.192 07:12:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:07.192 [2024-11-20 07:12:10.607506] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:07.192 Null2 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.192 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:10:07.450 Null3 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:07.450 Null4 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.450 07:12:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.450 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:07.451 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.451 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:07.451 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.451 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:10:07.708 00:10:07.708 Discovery Log Number of Records 6, Generation counter 6 00:10:07.708 =====Discovery Log Entry 0====== 00:10:07.708 trtype: tcp 00:10:07.708 adrfam: ipv4 00:10:07.708 subtype: current discovery subsystem 00:10:07.708 treq: not required 00:10:07.708 portid: 0 00:10:07.708 trsvcid: 4420 00:10:07.708 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:07.708 traddr: 10.0.0.2 00:10:07.708 eflags: explicit discovery connections, duplicate discovery information 00:10:07.708 sectype: none 00:10:07.708 =====Discovery Log Entry 1====== 00:10:07.708 trtype: tcp 00:10:07.708 adrfam: ipv4 00:10:07.708 subtype: nvme subsystem 00:10:07.708 treq: not required 00:10:07.708 portid: 0 00:10:07.708 trsvcid: 4420 00:10:07.708 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:07.708 traddr: 10.0.0.2 00:10:07.708 eflags: none 00:10:07.708 sectype: none 00:10:07.708 =====Discovery Log Entry 2====== 00:10:07.708 trtype: tcp 00:10:07.708 adrfam: ipv4 00:10:07.708 subtype: nvme subsystem 00:10:07.708 treq: not required 00:10:07.708 portid: 0 00:10:07.708 trsvcid: 4420 00:10:07.708 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:07.708 traddr: 10.0.0.2 00:10:07.708 eflags: none 00:10:07.708 sectype: none 00:10:07.708 =====Discovery Log Entry 3====== 00:10:07.708 trtype: tcp 00:10:07.708 adrfam: ipv4 00:10:07.708 subtype: nvme subsystem 00:10:07.708 treq: not required 00:10:07.708 portid: 0 00:10:07.708 trsvcid: 4420 00:10:07.709 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:07.709 traddr: 10.0.0.2 00:10:07.709 eflags: none 00:10:07.709 sectype: none 00:10:07.709 =====Discovery Log Entry 4====== 00:10:07.709 trtype: tcp 00:10:07.709 adrfam: ipv4 00:10:07.709 subtype: nvme subsystem 
00:10:07.709 treq: not required 00:10:07.709 portid: 0 00:10:07.709 trsvcid: 4420 00:10:07.709 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:07.709 traddr: 10.0.0.2 00:10:07.709 eflags: none 00:10:07.709 sectype: none 00:10:07.709 =====Discovery Log Entry 5====== 00:10:07.709 trtype: tcp 00:10:07.709 adrfam: ipv4 00:10:07.709 subtype: discovery subsystem referral 00:10:07.709 treq: not required 00:10:07.709 portid: 0 00:10:07.709 trsvcid: 4430 00:10:07.709 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:07.709 traddr: 10.0.0.2 00:10:07.709 eflags: none 00:10:07.709 sectype: none 00:10:07.709 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:07.709 Perform nvmf subsystem discovery via RPC 00:10:07.709 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:07.709 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.709 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:07.709 [ 00:10:07.709 { 00:10:07.709 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:07.709 "subtype": "Discovery", 00:10:07.709 "listen_addresses": [ 00:10:07.709 { 00:10:07.709 "trtype": "TCP", 00:10:07.709 "adrfam": "IPv4", 00:10:07.709 "traddr": "10.0.0.2", 00:10:07.709 "trsvcid": "4420" 00:10:07.709 } 00:10:07.709 ], 00:10:07.709 "allow_any_host": true, 00:10:07.709 "hosts": [] 00:10:07.709 }, 00:10:07.709 { 00:10:07.709 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:07.709 "subtype": "NVMe", 00:10:07.709 "listen_addresses": [ 00:10:07.709 { 00:10:07.709 "trtype": "TCP", 00:10:07.709 "adrfam": "IPv4", 00:10:07.709 "traddr": "10.0.0.2", 00:10:07.709 "trsvcid": "4420" 00:10:07.709 } 00:10:07.709 ], 00:10:07.709 "allow_any_host": true, 00:10:07.709 "hosts": [], 00:10:07.709 "serial_number": "SPDK00000000000001", 00:10:07.709 "model_number": "SPDK bdev Controller", 00:10:07.709 "max_namespaces": 32, 00:10:07.709 "min_cntlid": 1, 00:10:07.709 "max_cntlid": 65519, 00:10:07.709 "namespaces": [ 00:10:07.709 { 00:10:07.709 "nsid": 1, 00:10:07.709 "bdev_name": "Null1", 00:10:07.709 "name": "Null1", 00:10:07.709 "nguid": "BB60BE3380544A6CA9E381C7218EB6F0", 00:10:07.709 "uuid": "bb60be33-8054-4a6c-a9e3-81c7218eb6f0" 00:10:07.709 } 00:10:07.709 ] 00:10:07.709 }, 00:10:07.709 { 00:10:07.709 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:07.709 "subtype": "NVMe", 00:10:07.709 "listen_addresses": [ 00:10:07.709 { 00:10:07.709 "trtype": "TCP", 00:10:07.709 "adrfam": "IPv4", 00:10:07.709 "traddr": "10.0.0.2", 00:10:07.709 "trsvcid": "4420" 00:10:07.709 } 00:10:07.709 ], 00:10:07.709 "allow_any_host": true, 00:10:07.709 "hosts": [], 00:10:07.709 "serial_number": "SPDK00000000000002", 00:10:07.709 "model_number": "SPDK bdev Controller", 00:10:07.709 "max_namespaces": 32, 00:10:07.709 "min_cntlid": 1, 00:10:07.709 "max_cntlid": 65519, 00:10:07.709 "namespaces": [ 00:10:07.709 { 00:10:07.709 "nsid": 1, 00:10:07.709 "bdev_name": "Null2", 00:10:07.709 "name": "Null2", 00:10:07.709 "nguid": "63BC6412CBA0403F978DFEC89F0849CD", 00:10:07.709 "uuid": "63bc6412-cba0-403f-978d-fec89f0849cd" 00:10:07.709 } 00:10:07.709 ] 00:10:07.709 }, 00:10:07.709 { 00:10:07.709 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:07.709 "subtype": "NVMe", 00:10:07.709 "listen_addresses": [ 00:10:07.709 { 00:10:07.709 "trtype": "TCP", 00:10:07.709 "adrfam": "IPv4", 00:10:07.709 "traddr": "10.0.0.2", 
00:10:07.709 "trsvcid": "4420" 00:10:07.709 } 00:10:07.709 ], 00:10:07.709 "allow_any_host": true, 00:10:07.709 "hosts": [], 00:10:07.709 "serial_number": "SPDK00000000000003", 00:10:07.709 "model_number": "SPDK bdev Controller", 00:10:07.709 "max_namespaces": 32, 00:10:07.709 "min_cntlid": 1, 00:10:07.709 "max_cntlid": 65519, 00:10:07.709 "namespaces": [ 00:10:07.709 { 00:10:07.709 "nsid": 1, 00:10:07.709 "bdev_name": "Null3", 00:10:07.709 "name": "Null3", 00:10:07.709 "nguid": "8191BBB5399D4D12B599C07D5C5397C0", 00:10:07.709 "uuid": "8191bbb5-399d-4d12-b599-c07d5c5397c0" 00:10:07.709 } 00:10:07.709 ] 00:10:07.709 }, 00:10:07.709 { 00:10:07.709 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:07.709 "subtype": "NVMe", 00:10:07.709 "listen_addresses": [ 00:10:07.709 { 00:10:07.709 "trtype": "TCP", 00:10:07.709 "adrfam": "IPv4", 00:10:07.709 "traddr": "10.0.0.2", 00:10:07.709 "trsvcid": "4420" 00:10:07.709 } 00:10:07.709 ], 00:10:07.709 "allow_any_host": true, 00:10:07.709 "hosts": [], 00:10:07.709 "serial_number": "SPDK00000000000004", 00:10:07.709 "model_number": "SPDK bdev Controller", 00:10:07.709 "max_namespaces": 32, 00:10:07.709 "min_cntlid": 1, 00:10:07.709 "max_cntlid": 65519, 00:10:07.709 "namespaces": [ 00:10:07.709 { 00:10:07.709 "nsid": 1, 00:10:07.709 "bdev_name": "Null4", 00:10:07.709 "name": "Null4", 00:10:07.709 "nguid": "4B11B4D172B94BEBA1C3E38AAC18FA8D", 00:10:07.709 "uuid": "4b11b4d1-72b9-4beb-a1c3-e38aac18fa8d" 00:10:07.709 } 00:10:07.709 ] 00:10:07.709 } 00:10:07.709 ] 00:10:07.709 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.709 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:07.709 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:07.709 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:07.709 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.709 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:07.709 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.709 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:07.709 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.709 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:07.709 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.709 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:07.709 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:07.709 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.709 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:07.709 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.709 07:12:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:10:07.709 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.709 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:07.709 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.709 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:07.709 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:07.709 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.709 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:07.709 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.709 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:07.709 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.709 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:07.709 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.709 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:07.709 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:07.709 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.709 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:07.709 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.709 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:07.709 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.709 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:07.709 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.709 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:07.709 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.709 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:07.710 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.710 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:07.710 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:07.710 07:12:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.710 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:07.710 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.710 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:07.710 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:10:07.710 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:07.710 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:07.710 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:07.710 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:10:07.710 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:07.710 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:10:07.710 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:07.710 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:07.710 rmmod nvme_tcp 00:10:07.710 rmmod nvme_fabrics 00:10:07.710 rmmod nvme_keyring 00:10:07.710 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:07.710 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:10:07.710 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:10:07.710 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2445828 ']' 00:10:07.710 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2445828 00:10:07.710 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' -z 2445828 ']' 00:10:07.710 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # kill -0 2445828 00:10:07.710 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # uname 00:10:07.710 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:07.710 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2445828 00:10:07.968 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:07.968 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:07.968 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2445828' 00:10:07.968 killing process with pid 2445828 00:10:07.968 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # kill 2445828 00:10:07.968 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@976 -- # wait 2445828 00:10:07.968 07:12:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:07.968 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:07.968 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:07.968 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:10:07.968 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:10:07.968 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:10:07.968 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:08.226 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:08.226 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:08.226 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.226 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.226 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.128 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:10.128 00:10:10.128 real 0m5.641s 00:10:10.128 user 0m4.886s 00:10:10.128 sys 0m1.937s 00:10:10.128 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:10.128 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:10.128 ************************************ 00:10:10.128 END TEST nvmf_target_discovery 00:10:10.128 ************************************ 00:10:10.128 07:12:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:10.128 07:12:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:10.128 07:12:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:10.128 07:12:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:10.128 ************************************ 00:10:10.128 START TEST nvmf_referrals 00:10:10.128 ************************************ 00:10:10.128 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:10.128 * Looking for test storage... 
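The teardown traced just above mirrors the setup in reverse: stop the target, unload the NVMe modules, strip every firewall rule tagged with the SPDK_NVMF comment, and tear the namespace back down. A hedged reconstruction of that sequence, using the names printed in the log (the real nvmftestfini/iptr helpers in nvmf/common.sh may differ in detail):

nvmf_teardown_sketch() {
    local pid=$1 ns=${2:-cvl_0_0_ns_spdk} ini_if=${3:-cvl_0_1}
    kill "$pid" 2>/dev/null; wait "$pid" 2>/dev/null          # stop the target app
    modprobe -v -r nvme-tcp nvme-fabrics || true              # best-effort module unload
    # drop every firewall rule that was tagged with the SPDK_NVMF comment at setup time
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete "$ns" 2>/dev/null                         # returns cvl_0_0 to the host
    ip -4 addr flush "$ini_if" 2>/dev/null                    # clear the initiator-side address
}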
00:10:10.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:10.387 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:10.387 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:10:10.387 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:10.387 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:10.387 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:10.387 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:10.387 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:10.387 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:10:10.387 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:10:10.387 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:10:10.387 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:10:10.387 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:10:10.387 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:10:10.387 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:10:10.387 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:10.387 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:10:10.387 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:10:10.387 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:10.387 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:10.387 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:10:10.387 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:10:10.387 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:10.387 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:10:10.387 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:10:10.387 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:10:10.387 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:10:10.387 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:10.387 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:10:10.387 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:10:10.387 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:10.387 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:10.387 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:10:10.387 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:10.387 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:10.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.387 --rc genhtml_branch_coverage=1 00:10:10.387 --rc genhtml_function_coverage=1 00:10:10.387 --rc genhtml_legend=1 00:10:10.387 --rc geninfo_all_blocks=1 00:10:10.387 --rc geninfo_unexecuted_blocks=1 00:10:10.387 00:10:10.387 ' 00:10:10.387 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:10.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.387 --rc genhtml_branch_coverage=1 00:10:10.388 --rc genhtml_function_coverage=1 00:10:10.388 --rc genhtml_legend=1 00:10:10.388 --rc geninfo_all_blocks=1 00:10:10.388 --rc geninfo_unexecuted_blocks=1 00:10:10.388 00:10:10.388 ' 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:10.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.388 --rc genhtml_branch_coverage=1 00:10:10.388 --rc genhtml_function_coverage=1 00:10:10.388 --rc genhtml_legend=1 00:10:10.388 --rc geninfo_all_blocks=1 00:10:10.388 --rc geninfo_unexecuted_blocks=1 00:10:10.388 00:10:10.388 ' 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:10.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.388 --rc genhtml_branch_coverage=1 00:10:10.388 --rc genhtml_function_coverage=1 00:10:10.388 --rc genhtml_legend=1 00:10:10.388 --rc geninfo_all_blocks=1 00:10:10.388 --rc geninfo_unexecuted_blocks=1 00:10:10.388 00:10:10.388 ' 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:10.388 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
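The lcov gate traced a few lines up (lt 1.15 2 via scripts/common.sh cmp_versions) splits each version string on '.', '-' or ':' and compares it component by component. A condensed sketch of that comparison, assuming purely numeric components and treating a missing component as 0:

version_lt() {                                    # returns 0 (true) when $1 < $2
    local IFS='.-:'
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i x y n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
        x=${a[i]:-0} y=${b[i]:-0}                 # pad the shorter version with zeros
        (( x > y )) && return 1
        (( x < y )) && return 0
    done
    return 1                                      # equal is not "less than"
}
# version_lt 1.15 2 && echo "lcov is older than 2, fall back to the plain LCOV_OPTS"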
00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:10.388 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:10.389 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:10.389 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:10.389 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:10.389 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.389 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:10.389 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.389 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:10.389 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:10.389 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:10:10.389 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:10:12.288 07:12:15 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:12.288 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:12.288 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:12.288 
07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:12.288 Found net devices under 0000:09:00.0: cvl_0_0 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:12.288 Found net devices under 0000:09:00.1: cvl_0_1 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:12.288 07:12:15 
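The trace above is nvmf/common.sh enumerating NICs the autotest supports for NVMe-oF: it fills bash arrays keyed by PCI vendor:device ID (Intel E810 = 0x1592/0x159b, X722 = 0x37d2, plus a list of Mellanox IDs), keeps only the e810 entries because this run uses SPDK_TEST_NVMF_NICS=e810 over TCP, and then maps each matching PCI function to its kernel net device through sysfs. A minimal sketch of that sysfs lookup, using the two addresses reported in this run:

for pci in 0000:09:00.0 0000:09:00.1; do                 # E810 functions found above
  for netdir in /sys/bus/pci/devices/"$pci"/net/*; do    # same path the script expands
    echo "Found net devices under $pci: ${netdir##*/}"   # -> cvl_0_0 / cvl_0_1
  done
done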
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:12.288 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:12.289 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:12.289 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:12.289 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:12.289 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:12.289 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:12.289 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:12.289 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:12.289 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:12.289 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:12.289 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:12.289 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:12.289 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:12.289 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:12.289 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:12.289 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:12.547 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:12.547 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:12.547 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:12.547 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:12.547 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:12.547 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:12.547 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:12.547 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:12.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:12.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms 00:10:12.547 00:10:12.547 --- 10.0.0.2 ping statistics --- 00:10:12.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.547 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:10:12.547 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:12.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:12.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:10:12.547 00:10:12.547 --- 10.0.0.1 ping statistics --- 00:10:12.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.547 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:10:12.547 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:12.547 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:10:12.547 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:12.547 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:12.547 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:12.547 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:12.547 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:12.547 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:12.547 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:12.547 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:12.547 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:12.547 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:12.547 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:12.547 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2447928 00:10:12.547 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2447928 00:10:12.547 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:12.547 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # '[' -z 2447928 ']' 00:10:12.547 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.547 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:12.547 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
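At this point nvmf_tcp_init has finished building the two-sided TCP test topology: cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, cvl_0_0 is moved into a fresh namespace cvl_0_0_ns_spdk as the target at 10.0.0.2/24, an iptables ACCEPT rule opens TCP port 4420 on the initiator interface, and one ping in each direction proves connectivity before nvmf_tgt is launched inside the namespace. Condensed from the commands in the trace (interface names and addresses are the ones used in this run):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                                 # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target namespace -> initiator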
00:10:12.547 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:12.547 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:12.547 [2024-11-20 07:12:15.911718] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:10:12.547 [2024-11-20 07:12:15.911812] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.804 [2024-11-20 07:12:15.986198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:12.804 [2024-11-20 07:12:16.047529] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:12.804 [2024-11-20 07:12:16.047584] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:12.804 [2024-11-20 07:12:16.047614] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:12.804 [2024-11-20 07:12:16.047625] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:12.804 [2024-11-20 07:12:16.047634] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:12.804 [2024-11-20 07:12:16.049171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.804 [2024-11-20 07:12:16.049230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.804 [2024-11-20 07:12:16.049298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:12.804 [2024-11-20 07:12:16.049311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.804 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:12.804 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@866 -- # return 0 00:10:12.804 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:12.804 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:12.804 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:12.804 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:12.804 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:12.804 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.804 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:12.804 [2024-11-20 07:12:16.212196] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:12.804 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.804 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:12.804 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.804 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:10:13.061 [2024-11-20 07:12:16.235501] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:13.061 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.061 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:13.061 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.061 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:13.061 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.061 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:13.061 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.061 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:13.061 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.061 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:13.061 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.061 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:13.061 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.061 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:13.061 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.061 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:13.061 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:13.061 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.061 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:13.061 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:13.061 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:13.061 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:13.061 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.061 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:13.061 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:13.061 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:13.061 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.061 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:13.061 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
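referrals.sh lines 40-49 then exercise the referral RPCs against the target that was just started: a TCP transport is created, a discovery listener is opened on 10.0.0.2:8009, three referrals pointing at 127.0.0.2/.3/.4 port 4430 are registered, and the count and addresses are read back. rpc_cmd is the autotest wrapper that forwards its arguments to SPDK's scripts/rpc.py against the target's RPC socket (wrapper behaviour assumed; the socket handling lives in common code outside this excerpt). The equivalent direct calls look roughly like:

RPC="ip netns exec cvl_0_0_ns_spdk ./scripts/rpc.py"   # path relative to the spdk checkout, assumed
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
$RPC nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
$RPC nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
$RPC nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
$RPC nvmf_discovery_get_referrals | jq length                      # expected: 3
$RPC nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort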
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:13.061 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:13.061 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:13.061 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:13.061 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:13.061 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:13.061 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:13.318 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:13.318 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:13.318 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:13.318 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.318 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:13.318 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.318 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:13.318 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.318 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:13.318 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.318 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:13.318 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.318 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:13.318 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.318 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:13.318 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.318 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:13.318 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:13.318 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.318 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:13.318 07:12:16 
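get_referral_ips nvme cross-checks the same list from the host side: it fetches the discovery log page from 10.0.0.2:8009 with nvme-cli and filters out the record describing the current discovery subsystem itself, so only referral records remain. NVME_HOSTNQN and NVME_HOSTID are the values nvmf/common.sh generated for this run (they appear verbatim in the trace). Standalone, the check is:

nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
     -t tcp -a 10.0.0.2 -s 8009 -o json \
  | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
  | sort
# prints 127.0.0.2 127.0.0.3 127.0.0.4 while the three referrals exist,
# and nothing after referrals.sh lines 52-54 remove them again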
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:13.318 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:13.318 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:13.318 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:13.318 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:13.318 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:13.576 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:13.576 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:13.576 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:13.576 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.576 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:13.576 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.576 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:13.576 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.576 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:13.576 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.576 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:13.576 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:13.576 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:13.576 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:13.576 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.576 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:13.576 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:13.576 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.576 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:13.576 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:13.576 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:13.576 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:10:13.576 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:13.576 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:13.576 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:13.576 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:13.834 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:13.834 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:13.834 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:13.834 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:13.834 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:13.834 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:13.834 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:13.834 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:13.834 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:13.834 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:13.834 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:13.834 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:13.834 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:14.091 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:14.091 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:14.091 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.091 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:14.091 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.091 07:12:17 
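Lines 60-68 of referrals.sh register 127.0.0.2:4430 twice with different subsystem NQNs: once with -n discovery and once with -n nqn.2016-06.io.spdk:cnode1. In the discovery log page the first surfaces as a "discovery subsystem referral" record and the second as an "nvme subsystem" record, which is exactly what get_discovery_entries filters on. A sketch of that check, reusing the discover call from above:

discover() {
  nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
       -t tcp -a 10.0.0.2 -s 8009 -o json
}
discover | jq -r '.records[] | select(.subtype == "nvme subsystem").subnqn'
# -> nqn.2016-06.io.spdk:cnode1 while the cnode1 referral is registered
discover | jq -r '.records[] | select(.subtype == "discovery subsystem referral").subnqn'
# -> nqn.2014-08.org.nvmexpress.discovery for the referral added with -n discovery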
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:14.091 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:14.091 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:14.091 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:14.091 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.091 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:14.091 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:14.091 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.091 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:14.091 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:14.091 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:14.091 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:14.091 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:14.091 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:14.091 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:14.091 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:14.348 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:14.348 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:14.348 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:14.348 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:14.348 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:14.348 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:14.348 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:14.604 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:14.605 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:14.605 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:14.605 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:10:14.605 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:14.605 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:14.605 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:14.605 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:14.605 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.605 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:14.605 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.605 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:14.605 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:14.605 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.605 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:14.605 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.605 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:14.605 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:14.605 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:14.605 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:14.605 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:14.605 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:14.605 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:14.864 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:14.864 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:14.864 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:14.864 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:14.864 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:14.864 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:10:14.864 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:10:14.864 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:10:14.864 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:14.864 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:14.864 rmmod nvme_tcp 00:10:14.864 rmmod nvme_fabrics 00:10:14.864 rmmod nvme_keyring 00:10:15.152 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:15.152 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:10:15.152 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:10:15.152 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2447928 ']' 00:10:15.152 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2447928 00:10:15.152 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' -z 2447928 ']' 00:10:15.152 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # kill -0 2447928 00:10:15.152 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # uname 00:10:15.152 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:15.152 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2447928 00:10:15.152 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:15.152 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:15.152 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2447928' 00:10:15.152 killing process with pid 2447928 00:10:15.152 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@971 -- # kill 2447928 00:10:15.152 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@976 -- # wait 2447928 00:10:15.152 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:15.152 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:15.152 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:15.152 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:10:15.152 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:10:15.152 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:15.152 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:10:15.432 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:15.432 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:15.432 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.432 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:15.432 07:12:18 
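nvmftestfini now unwinds the setup so the next test starts clean: the nvme transport modules are unloaded, the nvmf_tgt process (pid 2447928 in this run) is killed and reaped, the SPDK_NVMF-tagged iptables rule is filtered out of a saved ruleset, and the test namespace and leftover addresses are removed (the ip -4 addr flush that follows on the next lines). Condensed, with the namespace deletion stated as an assumed equivalent of remove_spdk_ns:

modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics                              # nvme_keyring is dropped alongside
kill 2447928                                             # killprocess also waits for the target to exit
iptables-save | grep -v SPDK_NVMF | iptables-restore     # keep every rule the test did not add
ip netns delete cvl_0_0_ns_spdk                          # assumed equivalent of remove_spdk_ns
ip -4 addr flush cvl_0_1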
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.335 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:17.335 00:10:17.335 real 0m7.107s 00:10:17.335 user 0m11.447s 00:10:17.335 sys 0m2.292s 00:10:17.335 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:17.335 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:17.335 ************************************ 00:10:17.335 END TEST nvmf_referrals 00:10:17.335 ************************************ 00:10:17.335 07:12:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:17.335 07:12:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:17.335 07:12:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:17.335 07:12:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:17.335 ************************************ 00:10:17.335 START TEST nvmf_connect_disconnect 00:10:17.335 ************************************ 00:10:17.335 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:17.335 * Looking for test storage... 00:10:17.335 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:17.335 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:17.335 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:10:17.335 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:17.595 07:12:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:17.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.595 --rc genhtml_branch_coverage=1 00:10:17.595 --rc genhtml_function_coverage=1 00:10:17.595 --rc genhtml_legend=1 00:10:17.595 --rc geninfo_all_blocks=1 00:10:17.595 --rc geninfo_unexecuted_blocks=1 00:10:17.595 00:10:17.595 ' 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:17.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.595 --rc genhtml_branch_coverage=1 00:10:17.595 --rc genhtml_function_coverage=1 00:10:17.595 --rc genhtml_legend=1 00:10:17.595 --rc geninfo_all_blocks=1 00:10:17.595 --rc geninfo_unexecuted_blocks=1 00:10:17.595 00:10:17.595 ' 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:17.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.595 --rc genhtml_branch_coverage=1 00:10:17.595 --rc genhtml_function_coverage=1 00:10:17.595 --rc genhtml_legend=1 00:10:17.595 --rc geninfo_all_blocks=1 00:10:17.595 --rc geninfo_unexecuted_blocks=1 00:10:17.595 00:10:17.595 ' 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:17.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.595 --rc genhtml_branch_coverage=1 00:10:17.595 --rc genhtml_function_coverage=1 00:10:17.595 --rc genhtml_legend=1 00:10:17.595 --rc geninfo_all_blocks=1 00:10:17.595 --rc geninfo_unexecuted_blocks=1 00:10:17.595 00:10:17.595 ' 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.595 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.596 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.596 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:17.596 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.596 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:10:17.596 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:17.596 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:17.596 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:17.596 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:17.596 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:17.596 07:12:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:17.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:17.596 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:17.596 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:17.596 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:17.596 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:17.596 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:17.596 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:17.596 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:17.596 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:17.596 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:17.596 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:17.596 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:17.596 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.596 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.596 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.596 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:17.596 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:17.596 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:10:17.596 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:20.131 
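The "line 33: [: : integer expression expected" message near the start of this block comes from build_nvmf_app_args running a numeric test, '[' '' -eq 1 ']', on a variable that is empty in this configuration; the trace continues normally afterwards, so the warning is cosmetic here. Which flag is being tested is not visible in this excerpt; a generic guard that avoids the message looks like:

flag=""                                   # stand-in for the empty variable tested at common.sh line 33
if [ "${flag:-0}" -eq 1 ]; then           # default empty/unset to 0 before the numeric comparison
  echo "feature enabled"
fi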
07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:20.131 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:20.131 
07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:20.131 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.131 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:20.131 Found net devices under 0000:09:00.0: cvl_0_0 00:10:20.132 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.132 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:20.132 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.132 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:20.132 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
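For each selected port the trace expands /sys/bus/pci/devices/$pci/net/* and, after its "up == up" state check, reports the kernel interface name ("Found net devices under 0000:09:00.0: cvl_0_0"). A small sketch of that sysfs lookup for one address taken from the trace; reading operstate is an assumption used for illustration, not necessarily the exact attribute common.sh checks:

    pci=0000:09:00.0                                 # first port reported above
    for path in /sys/bus/pci/devices/$pci/net/*; do
        ifname=${path##*/}                           # e.g. cvl_0_0
        state=$(<"$path/operstate")                  # "up" when the link is usable
        echo "Found net devices under $pci: $ifname ($state)"
    done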
00:10:20.132 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:20.132 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:20.132 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.132 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:20.132 Found net devices under 0000:09:00.1: cvl_0_1 00:10:20.132 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.132 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:20.132 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:10:20.132 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:20.132 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:20.132 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:20.132 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:20.132 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:20.132 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:20.132 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:20.132 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:20.132 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:20.132 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:20.132 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:20.132 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:20.132 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:20.132 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:20.132 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:20.132 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:20.132 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:20.132 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:20.132 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:20.132 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:10:20.132 00:10:20.132 --- 10.0.0.2 ping statistics --- 00:10:20.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.132 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:20.132 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:20.132 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:10:20.132 00:10:20.132 --- 10.0.0.1 ping statistics --- 00:10:20.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.132 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=2450236 00:10:20.132 07:12:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2450236 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # '[' -z 2450236 ']' 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:20.132 [2024-11-20 07:12:23.163868] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:10:20.132 [2024-11-20 07:12:23.163974] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.132 [2024-11-20 07:12:23.234283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:20.132 [2024-11-20 07:12:23.290123] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:20.132 [2024-11-20 07:12:23.290175] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:20.132 [2024-11-20 07:12:23.290198] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:20.132 [2024-11-20 07:12:23.290208] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:20.132 [2024-11-20 07:12:23.290218] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
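The preceding trace turns the two ports into a point-to-point rig: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24 (target side), cvl_0_1 stays in the root namespace with 10.0.0.1/24 (initiator side), an ACCEPT rule tagged SPDK_NVMF opens TCP port 4420, both directions are verified with ping, and nvmf_tgt is then started inside the namespace while waitforlisten polls for the RPC socket. A condensed sketch of the same arrangement, reusing the names and addresses from the trace; the socket-polling loop is a simplified stand-in for waitforlisten:

    ns=cvl_0_0_ns_spdk
    ip netns add "$ns"
    ip link set cvl_0_0 netns "$ns"
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$ns" ip link set cvl_0_0 up
    ip netns exec "$ns" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                               # root namespace -> target
    ip netns exec "$ns" ping -c 1 10.0.0.1           # target namespace -> initiator
    ip netns exec "$ns" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done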
00:10:20.132 [2024-11-20 07:12:23.291690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.132 [2024-11-20 07:12:23.291748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:20.132 [2024-11-20 07:12:23.291808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:20.132 [2024-11-20 07:12:23.291812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@866 -- # return 0 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:20.132 [2024-11-20 07:12:23.437440] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:20.132 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:20.133 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.133 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:20.133 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.133 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:20.133 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.133 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:20.133 07:12:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.133 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:20.133 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.133 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:20.133 [2024-11-20 07:12:23.511982] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:20.133 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.133 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:10:20.133 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:10:20.133 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:10:23.409 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.256 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.256 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:10:34.256 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:34.256 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:34.256 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:10:34.256 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:34.256 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:10:34.256 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:34.256 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:34.256 rmmod nvme_tcp 00:10:34.256 rmmod nvme_fabrics 00:10:34.256 rmmod nvme_keyring 00:10:34.256 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:34.256 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:10:34.256 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:10:34.256 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2450236 ']' 00:10:34.256 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2450236 00:10:34.256 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' -z 2450236 ']' 00:10:34.257 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # kill -0 2450236 00:10:34.257 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # uname 
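With the target listening, the trace provisions it over RPC (a 64 MB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1, the Malloc0 namespace, and a TCP listener on 10.0.0.2 port 4420) and then runs num_iterations=5 connect/disconnect cycles; each "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" line above is one cycle. A sketch of the equivalent rpc.py and nvme-cli sequence, assuming rpc.py talks to the target's default /var/tmp/spdk.sock:

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
    $rpc bdev_malloc_create 64 512                                # returns Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    for i in 1 2 3 4 5; do
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1             # "disconnected 1 controller(s)"
    done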
00:10:34.257 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:34.257 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2450236 00:10:34.257 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:34.257 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:34.257 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2450236' 00:10:34.257 killing process with pid 2450236 00:10:34.257 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # kill 2450236 00:10:34.257 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@976 -- # wait 2450236 00:10:34.257 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:34.257 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:34.257 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:34.257 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:10:34.257 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:10:34.257 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:10:34.257 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:34.257 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:34.257 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:34.257 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.257 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:34.257 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.791 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:36.791 00:10:36.791 real 0m19.050s 00:10:36.791 user 0m57.170s 00:10:36.791 sys 0m3.358s 00:10:36.791 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:36.791 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:36.791 ************************************ 00:10:36.791 END TEST nvmf_connect_disconnect 00:10:36.791 ************************************ 00:10:36.791 07:12:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:36.791 07:12:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:36.791 07:12:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:36.791 07:12:39 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:36.791 ************************************ 00:10:36.791 START TEST nvmf_multitarget 00:10:36.791 ************************************ 00:10:36.791 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:36.791 * Looking for test storage... 00:10:36.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:36.791 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:36.791 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:10:36.791 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:36.791 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:36.791 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:36.791 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:36.791 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:36.791 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:10:36.791 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:10:36.791 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:10:36.791 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:10:36.791 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:10:36.791 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:10:36.791 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:10:36.791 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:36.791 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:10:36.791 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:10:36.791 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:36.791 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:36.791 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:10:36.791 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:36.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.792 --rc genhtml_branch_coverage=1 00:10:36.792 --rc genhtml_function_coverage=1 00:10:36.792 --rc genhtml_legend=1 00:10:36.792 --rc geninfo_all_blocks=1 00:10:36.792 --rc geninfo_unexecuted_blocks=1 00:10:36.792 00:10:36.792 ' 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:36.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.792 --rc genhtml_branch_coverage=1 00:10:36.792 --rc genhtml_function_coverage=1 00:10:36.792 --rc genhtml_legend=1 00:10:36.792 --rc geninfo_all_blocks=1 00:10:36.792 --rc geninfo_unexecuted_blocks=1 00:10:36.792 00:10:36.792 ' 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:36.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.792 --rc genhtml_branch_coverage=1 00:10:36.792 --rc genhtml_function_coverage=1 00:10:36.792 --rc genhtml_legend=1 00:10:36.792 --rc geninfo_all_blocks=1 00:10:36.792 --rc geninfo_unexecuted_blocks=1 00:10:36.792 00:10:36.792 ' 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:36.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.792 --rc genhtml_branch_coverage=1 00:10:36.792 --rc genhtml_function_coverage=1 00:10:36.792 --rc genhtml_legend=1 00:10:36.792 --rc geninfo_all_blocks=1 00:10:36.792 --rc geninfo_unexecuted_blocks=1 00:10:36.792 00:10:36.792 ' 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:36.792 07:12:39 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.792 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:10:36.793 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.793 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:10:36.793 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:36.793 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:36.793 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:36.793 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:36.793 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:36.793 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:36.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:36.793 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:36.793 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:36.793 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:36.793 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:36.793 07:12:39 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:10:36.793 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:36.793 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:36.793 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:36.793 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:36.793 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:36.793 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.793 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:36.793 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.793 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:36.793 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:36.793 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:10:36.793 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:38.695 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:38.695 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:38.695 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:38.953 Found net devices under 0000:09:00.0: cvl_0_0 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:38.953 Found net devices under 0000:09:00.1: cvl_0_1 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:38.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:38.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:10:38.953 00:10:38.953 --- 10.0.0.2 ping statistics --- 00:10:38.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.953 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:38.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:38.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:10:38.953 00:10:38.953 --- 10.0.0.1 ping statistics --- 00:10:38.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.953 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2454006 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2454006 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # '[' -z 2454006 ']' 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:38.953 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.954 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:38.954 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:38.954 [2024-11-20 07:12:42.335318] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:10:38.954 [2024-11-20 07:12:42.335384] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:39.212 [2024-11-20 07:12:42.404082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:39.212 [2024-11-20 07:12:42.460461] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:39.212 [2024-11-20 07:12:42.460516] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:39.212 [2024-11-20 07:12:42.460539] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:39.212 [2024-11-20 07:12:42.460550] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:39.212 [2024-11-20 07:12:42.460560] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:39.212 [2024-11-20 07:12:42.462110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.212 [2024-11-20 07:12:42.462181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:39.212 [2024-11-20 07:12:42.462246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:39.212 [2024-11-20 07:12:42.462249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.212 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:39.212 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@866 -- # return 0 00:10:39.212 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:39.212 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:39.212 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:39.212 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:39.212 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:39.212 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:39.212 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:10:39.469 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:10:39.469 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:10:39.469 "nvmf_tgt_1" 00:10:39.469 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:10:39.727 "nvmf_tgt_2" 00:10:39.727 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
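The multitarget test drives a separate helper, multitarget_rpc.py: it reads the current target list with nvmf_get_targets (jq length must be 1, the default target), adds nvmf_tgt_1 and nvmf_tgt_2 (the -s 32 argument is carried over verbatim from the trace), checks that the count is now 3, deletes both again, and finally confirms the count is back to 1. A sketch of the same flow, assuming the helper is invoked from the SPDK tree as above:

    rpc_py=./test/nvmf/target/multitarget_rpc.py
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]    # only the default target
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32         # prints "nvmf_tgt_1"
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32         # prints "nvmf_tgt_2"
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]
    $rpc_py nvmf_delete_target -n nvmf_tgt_1               # prints "true"
    $rpc_py nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]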
00:10:39.727 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:10:39.727 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:10:39.727 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:10:39.985 true 00:10:39.985 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:10:39.985 true 00:10:39.985 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:39.985 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:10:40.242 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:10:40.242 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:40.242 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:10:40.242 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:40.242 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:10:40.242 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:40.242 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:10:40.242 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:40.242 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:40.242 rmmod nvme_tcp 00:10:40.242 rmmod nvme_fabrics 00:10:40.242 rmmod nvme_keyring 00:10:40.242 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:40.242 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:10:40.242 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:10:40.242 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2454006 ']' 00:10:40.242 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2454006 00:10:40.242 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' -z 2454006 ']' 00:10:40.242 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # kill -0 2454006 00:10:40.242 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # uname 00:10:40.242 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:40.242 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2454006 00:10:40.242 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:40.242 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:40.242 07:12:43 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2454006' 00:10:40.242 killing process with pid 2454006 00:10:40.242 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # kill 2454006 00:10:40.242 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@976 -- # wait 2454006 00:10:40.500 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:40.500 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:40.500 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:40.500 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:10:40.500 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:10:40.500 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:40.500 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:10:40.500 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:40.500 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:40.500 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.500 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:40.500 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.405 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:42.405 00:10:42.405 real 0m6.009s 00:10:42.405 user 0m6.681s 00:10:42.405 sys 0m2.111s 00:10:42.405 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:42.405 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:42.405 ************************************ 00:10:42.405 END TEST nvmf_multitarget 00:10:42.405 ************************************ 00:10:42.405 07:12:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:42.405 07:12:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:42.405 07:12:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:42.405 07:12:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:42.405 ************************************ 00:10:42.405 START TEST nvmf_rpc 00:10:42.405 ************************************ 00:10:42.405 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:42.663 * Looking for test storage... 
00:10:42.663 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:42.663 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:42.663 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:10:42.663 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:42.663 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:42.663 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:42.663 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:42.663 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:42.663 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:42.663 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:42.663 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:42.663 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:42.663 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:42.663 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:42.663 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:42.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.664 --rc genhtml_branch_coverage=1 00:10:42.664 --rc genhtml_function_coverage=1 00:10:42.664 --rc genhtml_legend=1 00:10:42.664 --rc geninfo_all_blocks=1 00:10:42.664 --rc geninfo_unexecuted_blocks=1 00:10:42.664 00:10:42.664 ' 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:42.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.664 --rc genhtml_branch_coverage=1 00:10:42.664 --rc genhtml_function_coverage=1 00:10:42.664 --rc genhtml_legend=1 00:10:42.664 --rc geninfo_all_blocks=1 00:10:42.664 --rc geninfo_unexecuted_blocks=1 00:10:42.664 00:10:42.664 ' 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:42.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.664 --rc genhtml_branch_coverage=1 00:10:42.664 --rc genhtml_function_coverage=1 00:10:42.664 --rc genhtml_legend=1 00:10:42.664 --rc geninfo_all_blocks=1 00:10:42.664 --rc geninfo_unexecuted_blocks=1 00:10:42.664 00:10:42.664 ' 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:42.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.664 --rc genhtml_branch_coverage=1 00:10:42.664 --rc genhtml_function_coverage=1 00:10:42.664 --rc genhtml_legend=1 00:10:42.664 --rc geninfo_all_blocks=1 00:10:42.664 --rc geninfo_unexecuted_blocks=1 00:10:42.664 00:10:42.664 ' 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:42.664 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:42.664 07:12:45 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.664 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:42.665 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:42.665 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:10:42.665 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:45.196 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:45.196 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:45.196 Found net devices under 0000:09:00.0: cvl_0_0 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:45.196 Found net devices under 0000:09:00.1: cvl_0_1 00:10:45.196 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:45.197 07:12:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:45.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:45.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:10:45.197 00:10:45.197 --- 10.0.0.2 ping statistics --- 00:10:45.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.197 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:45.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:45.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:10:45.197 00:10:45.197 --- 10.0.0.1 ping statistics --- 00:10:45.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.197 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2456111 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2456111 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # '[' -z 2456111 ']' 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.197 [2024-11-20 07:12:48.299767] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:10:45.197 [2024-11-20 07:12:48.299864] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:45.197 [2024-11-20 07:12:48.371406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:45.197 [2024-11-20 07:12:48.426789] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:45.197 [2024-11-20 07:12:48.426856] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:45.197 [2024-11-20 07:12:48.426869] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:45.197 [2024-11-20 07:12:48.426879] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:45.197 [2024-11-20 07:12:48.426895] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:45.197 [2024-11-20 07:12:48.428486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:45.197 [2024-11-20 07:12:48.428625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:45.197 [2024-11-20 07:12:48.428687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:45.197 [2024-11-20 07:12:48.428691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@866 -- # return 0 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.197 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:10:45.197 "tick_rate": 2700000000, 00:10:45.197 "poll_groups": [ 00:10:45.197 { 00:10:45.197 "name": "nvmf_tgt_poll_group_000", 00:10:45.197 "admin_qpairs": 0, 00:10:45.197 "io_qpairs": 0, 00:10:45.197 "current_admin_qpairs": 0, 00:10:45.197 "current_io_qpairs": 0, 00:10:45.197 "pending_bdev_io": 0, 00:10:45.197 "completed_nvme_io": 0, 00:10:45.197 "transports": [] 00:10:45.197 }, 00:10:45.197 { 00:10:45.197 "name": "nvmf_tgt_poll_group_001", 00:10:45.197 "admin_qpairs": 0, 00:10:45.197 "io_qpairs": 0, 00:10:45.197 "current_admin_qpairs": 0, 00:10:45.197 "current_io_qpairs": 0, 00:10:45.198 "pending_bdev_io": 0, 00:10:45.198 "completed_nvme_io": 0, 00:10:45.198 "transports": [] 00:10:45.198 }, 00:10:45.198 { 00:10:45.198 "name": "nvmf_tgt_poll_group_002", 00:10:45.198 "admin_qpairs": 0, 00:10:45.198 "io_qpairs": 0, 00:10:45.198 
"current_admin_qpairs": 0, 00:10:45.198 "current_io_qpairs": 0, 00:10:45.198 "pending_bdev_io": 0, 00:10:45.198 "completed_nvme_io": 0, 00:10:45.198 "transports": [] 00:10:45.198 }, 00:10:45.198 { 00:10:45.198 "name": "nvmf_tgt_poll_group_003", 00:10:45.198 "admin_qpairs": 0, 00:10:45.198 "io_qpairs": 0, 00:10:45.198 "current_admin_qpairs": 0, 00:10:45.198 "current_io_qpairs": 0, 00:10:45.198 "pending_bdev_io": 0, 00:10:45.198 "completed_nvme_io": 0, 00:10:45.198 "transports": [] 00:10:45.198 } 00:10:45.198 ] 00:10:45.198 }' 00:10:45.198 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:45.198 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:45.198 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:45.198 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.457 [2024-11-20 07:12:48.671045] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:10:45.457 "tick_rate": 2700000000, 00:10:45.457 "poll_groups": [ 00:10:45.457 { 00:10:45.457 "name": "nvmf_tgt_poll_group_000", 00:10:45.457 "admin_qpairs": 0, 00:10:45.457 "io_qpairs": 0, 00:10:45.457 "current_admin_qpairs": 0, 00:10:45.457 "current_io_qpairs": 0, 00:10:45.457 "pending_bdev_io": 0, 00:10:45.457 "completed_nvme_io": 0, 00:10:45.457 "transports": [ 00:10:45.457 { 00:10:45.457 "trtype": "TCP" 00:10:45.457 } 00:10:45.457 ] 00:10:45.457 }, 00:10:45.457 { 00:10:45.457 "name": "nvmf_tgt_poll_group_001", 00:10:45.457 "admin_qpairs": 0, 00:10:45.457 "io_qpairs": 0, 00:10:45.457 "current_admin_qpairs": 0, 00:10:45.457 "current_io_qpairs": 0, 00:10:45.457 "pending_bdev_io": 0, 00:10:45.457 "completed_nvme_io": 0, 00:10:45.457 "transports": [ 00:10:45.457 { 00:10:45.457 "trtype": "TCP" 00:10:45.457 } 00:10:45.457 ] 00:10:45.457 }, 00:10:45.457 { 00:10:45.457 "name": "nvmf_tgt_poll_group_002", 00:10:45.457 "admin_qpairs": 0, 00:10:45.457 "io_qpairs": 0, 00:10:45.457 "current_admin_qpairs": 0, 00:10:45.457 "current_io_qpairs": 0, 00:10:45.457 "pending_bdev_io": 0, 00:10:45.457 "completed_nvme_io": 0, 00:10:45.457 "transports": [ 00:10:45.457 { 00:10:45.457 "trtype": "TCP" 
00:10:45.457 } 00:10:45.457 ] 00:10:45.457 }, 00:10:45.457 { 00:10:45.457 "name": "nvmf_tgt_poll_group_003", 00:10:45.457 "admin_qpairs": 0, 00:10:45.457 "io_qpairs": 0, 00:10:45.457 "current_admin_qpairs": 0, 00:10:45.457 "current_io_qpairs": 0, 00:10:45.457 "pending_bdev_io": 0, 00:10:45.457 "completed_nvme_io": 0, 00:10:45.457 "transports": [ 00:10:45.457 { 00:10:45.457 "trtype": "TCP" 00:10:45.457 } 00:10:45.457 ] 00:10:45.457 } 00:10:45.457 ] 00:10:45.457 }' 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.457 Malloc1 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.457 [2024-11-20 07:12:48.833546] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:45.457 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:10:45.458 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:45.458 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:10:45.458 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:45.458 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:10:45.458 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:10:45.458 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:10:45.458 [2024-11-20 07:12:48.856179] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:10:45.458 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:45.458 could not add new controller: failed to write to nvme-fabrics device 00:10:45.458 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:10:45.458 07:12:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:45.458 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:45.458 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:45.458 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:45.458 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.458 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.715 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.715 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:46.280 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:10:46.280 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:10:46.280 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:46.280 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:46.280 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:10:48.193 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:48.193 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:48.193 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:48.193 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:48.193 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:48.193 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:10:48.193 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:48.193 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.193 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:48.193 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:10:48.193 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:48.193 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:48.193 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:48.193 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:48.193 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:10:48.193 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:48.193 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.193 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:48.450 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.450 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:48.450 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:10:48.450 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:48.450 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:10:48.450 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:48.450 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:10:48.450 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:48.450 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:10:48.450 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:48.450 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:10:48.450 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:10:48.450 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:48.450 [2024-11-20 07:12:51.645722] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:10:48.450 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:48.450 could not add new controller: failed to write to nvme-fabrics device 00:10:48.450 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:10:48.450 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:48.450 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:48.450 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:48.450 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:10:48.450 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.450 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:48.450 
07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.450 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:49.014 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:10:49.014 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:10:49.014 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:49.014 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:49.014 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:10:51.540 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:51.540 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:51.540 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:51.540 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:51.540 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:51.540 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:10:51.540 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:51.540 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.540 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:51.540 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:10:51.540 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:51.540 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:51.540 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:51.540 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:51.540 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:10:51.540 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:51.540 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.541 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.541 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.541 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:10:51.541 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:51.541 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:51.541 
07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.541 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.541 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.541 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:51.541 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.541 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.541 [2024-11-20 07:12:54.526903] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:51.541 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.541 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:51.541 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.541 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.541 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.541 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:51.541 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.541 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.541 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.541 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:52.106 07:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:52.106 07:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:10:52.106 07:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:52.106 07:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:52.106 07:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:10:54.005 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:54.005 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:54.005 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:54.005 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:54.005 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:54.005 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:10:54.005 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:54.005 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.005 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:54.005 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:10:54.005 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:54.005 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:54.005 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:54.005 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:54.005 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:10:54.005 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:54.005 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.005 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.005 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.005 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:54.005 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.005 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.005 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.005 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:54.005 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:54.005 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.005 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.005 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.005 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:54.005 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.005 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.005 [2024-11-20 07:12:57.392688] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:54.005 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.005 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:54.005 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.005 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.005 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.005 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:54.005 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.005 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.005 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.005 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:54.938 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:54.938 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:10:54.938 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:54.938 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:54.938 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:10:56.948 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:56.948 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:56.948 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:56.948 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:56.948 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:56.948 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:10:56.948 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:56.948 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.948 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:56.948 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:10:56.948 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:56.948 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:56.948 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:56.948 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:56.948 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:10:56.948 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:56.948 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.948 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.948 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.948 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:56.948 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.948 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.948 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.948 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:56.948 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:56.948 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.948 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.948 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.948 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:56.948 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.948 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.948 [2024-11-20 07:13:00.149897] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:56.948 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.948 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:56.948 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.948 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.948 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.948 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:56.948 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.948 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.948 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.948 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:57.514 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:57.515 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:10:57.515 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:57.515 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:57.515 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:00.042 
07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:00.042 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:00.042 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:00.042 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:00.042 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:00.042 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:00.042 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:00.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.042 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:00.042 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:00.042 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:00.042 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:00.042 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:00.042 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:00.042 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:11:00.042 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:00.042 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.042 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.042 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.042 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:00.042 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.042 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.042 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.042 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:00.042 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:00.042 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.043 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.043 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.043 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:00.043 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:00.043 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.043 [2024-11-20 07:13:02.976587] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:00.043 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.043 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:00.043 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.043 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.043 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.043 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:00.043 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.043 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.043 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.043 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:00.301 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:00.301 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:00.301 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:00.301 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:00.301 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:02.828 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:02.828 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:02.828 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:02.828 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:02.828 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:02.828 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:02.828 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:02.828 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.828 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:02.828 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:02.828 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:02.828 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 
00:11:02.828 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:02.828 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:02.828 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:11:02.828 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:02.828 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.828 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.828 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.828 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:02.828 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.828 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.828 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.828 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:02.828 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:02.828 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.828 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.828 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.828 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:02.828 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.828 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.828 [2024-11-20 07:13:05.859156] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:02.828 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.828 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:02.828 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.828 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.828 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.828 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:02.828 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.828 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.828 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.828 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:03.393 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:03.393 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:03.393 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:03.393 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:03.393 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:05.292 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:05.292 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:05.292 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:05.292 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:05.292 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:05.292 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:05.292 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:05.292 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.292 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:05.292 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:05.293 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:05.293 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:05.293 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:05.293 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:05.293 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:11:05.293 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:05.293 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.293 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.293 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.293 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:05.293 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.293 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.293 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.293 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:05.293 
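Each of the five iterations traced above (target/rpc.sh lines 81-94) walks the same lifecycle: create the subsystem, add a TCP listener, attach Malloc1 as namespace 5, allow any host, connect and disconnect from the initiator, then remove the namespace and delete the subsystem. Condensed into a standalone loop, with the serial, address and port taken from this run (a sketch, not the test script itself):

    for i in $(seq 1 5); do
        scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # explicit NSID 5
        scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1

        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        # ... wait for the SPDKISFASTANDAWESOME namespace to appear, then ...
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1

        scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done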
07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:05.293 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:05.293 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.293 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.293 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.293 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:05.293 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.293 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.293 [2024-11-20 07:13:08.695745] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:05.293 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.293 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:05.293 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.293 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.293 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.293 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:05.293 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.293 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.293 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.293 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:05.293 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.293 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.552 [2024-11-20 07:13:08.743810] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.552 
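The second loop traced here (target/rpc.sh lines 99-107) repeats the same create/listen/namespace cycle five times without ever connecting an initiator, and nvmf_subsystem_add_ns is invoked without -n, so the target picks the namespace ID itself; since it is the only namespace, the matching remove uses NSID 1. Only the two lines that differ from the loop sketched above:

    # Same lifecycle as before, but no initiator connects and the NSID is left to the target:
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # no -n: NSID auto-assigned
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # first free NSID is 1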
07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.552 [2024-11-20 07:13:08.791962] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.552 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.553 [2024-11-20 07:13:08.840111] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.553 [2024-11-20 07:13:08.888320] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:11:05.553 "tick_rate": 2700000000, 00:11:05.553 "poll_groups": [ 00:11:05.553 { 00:11:05.553 "name": "nvmf_tgt_poll_group_000", 00:11:05.553 "admin_qpairs": 2, 00:11:05.553 "io_qpairs": 84, 00:11:05.553 "current_admin_qpairs": 0, 00:11:05.553 "current_io_qpairs": 0, 00:11:05.553 "pending_bdev_io": 0, 00:11:05.553 "completed_nvme_io": 134, 00:11:05.553 "transports": [ 00:11:05.553 { 00:11:05.553 "trtype": "TCP" 00:11:05.553 } 00:11:05.553 ] 00:11:05.553 }, 00:11:05.553 { 00:11:05.553 "name": "nvmf_tgt_poll_group_001", 00:11:05.553 "admin_qpairs": 2, 00:11:05.553 "io_qpairs": 84, 00:11:05.553 "current_admin_qpairs": 0, 00:11:05.553 "current_io_qpairs": 0, 00:11:05.553 "pending_bdev_io": 0, 00:11:05.553 "completed_nvme_io": 133, 00:11:05.553 "transports": [ 00:11:05.553 { 00:11:05.553 "trtype": "TCP" 00:11:05.553 } 00:11:05.553 ] 00:11:05.553 }, 00:11:05.553 { 00:11:05.553 "name": "nvmf_tgt_poll_group_002", 00:11:05.553 "admin_qpairs": 1, 00:11:05.553 "io_qpairs": 84, 00:11:05.553 "current_admin_qpairs": 0, 00:11:05.553 "current_io_qpairs": 0, 00:11:05.553 "pending_bdev_io": 0, 00:11:05.553 "completed_nvme_io": 184, 00:11:05.553 "transports": [ 00:11:05.553 { 00:11:05.553 "trtype": "TCP" 00:11:05.553 } 00:11:05.553 ] 00:11:05.553 }, 00:11:05.553 { 00:11:05.553 "name": "nvmf_tgt_poll_group_003", 00:11:05.553 "admin_qpairs": 2, 00:11:05.553 "io_qpairs": 84, 00:11:05.553 "current_admin_qpairs": 0, 00:11:05.553 "current_io_qpairs": 0, 00:11:05.553 "pending_bdev_io": 0, 00:11:05.553 "completed_nvme_io": 235, 00:11:05.553 "transports": [ 00:11:05.553 { 00:11:05.553 "trtype": "TCP" 00:11:05.553 } 00:11:05.553 ] 00:11:05.553 } 00:11:05.553 ] 00:11:05.553 }' 00:11:05.553 07:13:08 
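The jsum helper invoked next (defined around lines 19-20 of target/rpc.sh) sums one field across all poll groups in the nvmf_get_stats output: with the four poll groups above reporting admin_qpairs of 2, 2, 1 and 2 and io_qpairs of 84 each, the checks reduce to (( 7 > 0 )) and (( 336 > 0 )). A sketch of the same jq-plus-awk aggregation run directly against the RPC (the test script pipes the captured $stats variable instead):

    # Sum a numeric field over all poll groups reported by nvmf_get_stats.
    jsum_sketch() {
        local filter=$1
        scripts/rpc.py nvmf_get_stats | jq "$filter" | awk '{s+=$1} END {print s}'
    }

    jsum_sketch '.poll_groups[].admin_qpairs'   # 2+2+1+2 = 7 in this run
    jsum_sketch '.poll_groups[].io_qpairs'      # 4 * 84  = 336 in this run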
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:05.553 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:05.812 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:05.812 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:05.812 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:05.812 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:05.812 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:05.812 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:11:05.812 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:05.812 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:05.812 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:11:05.812 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:05.812 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:11:05.812 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:05.812 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:11:05.812 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:05.812 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:05.812 rmmod nvme_tcp 00:11:05.812 rmmod nvme_fabrics 00:11:05.812 rmmod nvme_keyring 00:11:05.812 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:05.812 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:11:05.812 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:11:05.812 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2456111 ']' 00:11:05.812 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2456111 00:11:05.812 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' -z 2456111 ']' 00:11:05.812 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # kill -0 2456111 00:11:05.812 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # uname 00:11:05.812 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:05.812 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2456111 00:11:05.812 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:05.812 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:05.812 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
2456111' 00:11:05.812 killing process with pid 2456111 00:11:05.812 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@971 -- # kill 2456111 00:11:05.812 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@976 -- # wait 2456111 00:11:06.071 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:06.071 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:06.071 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:06.071 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:11:06.071 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:11:06.071 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:06.071 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:11:06.071 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:06.071 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:06.071 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.071 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:06.071 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:08.605 00:11:08.605 real 0m25.596s 00:11:08.605 user 1m23.176s 00:11:08.605 sys 0m4.165s 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:08.605 ************************************ 00:11:08.605 END TEST nvmf_rpc 00:11:08.605 ************************************ 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:08.605 ************************************ 00:11:08.605 START TEST nvmf_invalid 00:11:08.605 ************************************ 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:08.605 * Looking for test storage... 
00:11:08.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:08.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.605 --rc genhtml_branch_coverage=1 00:11:08.605 --rc genhtml_function_coverage=1 00:11:08.605 --rc genhtml_legend=1 00:11:08.605 --rc geninfo_all_blocks=1 00:11:08.605 --rc geninfo_unexecuted_blocks=1 00:11:08.605 00:11:08.605 ' 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:08.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.605 --rc genhtml_branch_coverage=1 00:11:08.605 --rc genhtml_function_coverage=1 00:11:08.605 --rc genhtml_legend=1 00:11:08.605 --rc geninfo_all_blocks=1 00:11:08.605 --rc geninfo_unexecuted_blocks=1 00:11:08.605 00:11:08.605 ' 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:08.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.605 --rc genhtml_branch_coverage=1 00:11:08.605 --rc genhtml_function_coverage=1 00:11:08.605 --rc genhtml_legend=1 00:11:08.605 --rc geninfo_all_blocks=1 00:11:08.605 --rc geninfo_unexecuted_blocks=1 00:11:08.605 00:11:08.605 ' 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:08.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.605 --rc genhtml_branch_coverage=1 00:11:08.605 --rc genhtml_function_coverage=1 00:11:08.605 --rc genhtml_legend=1 00:11:08.605 --rc geninfo_all_blocks=1 00:11:08.605 --rc geninfo_unexecuted_blocks=1 00:11:08.605 00:11:08.605 ' 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:11:08.605 07:13:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.605 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:11:08.606 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.606 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:11:08.606 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:08.606 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:08.606 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:08.606 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:08.606 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:08.606 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:08.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:08.606 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:08.606 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:08.606 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:08.606 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:08.606 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:08.606 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:08.606 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:11:08.606 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:11:08.606 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:11:08.606 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:08.606 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:08.606 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:08.606 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:08.606 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:08.606 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.606 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:08.606 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.606 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:08.606 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:08.606 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:11:08.606 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:10.505 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:10.505 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:11:10.505 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:10.505 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:10.505 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:10.505 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:10.505 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:10.505 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:11:10.505 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:10.505 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:11:10.505 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:10.506 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:10.506 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:10.506 Found net devices under 0000:09:00.0: cvl_0_0 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:10.506 Found net devices under 0000:09:00.1: cvl_0_1 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:10.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:10.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:11:10.506 00:11:10.506 --- 10.0.0.2 ping statistics --- 00:11:10.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.506 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:10.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:10.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:11:10.506 00:11:10.506 --- 10.0.0.1 ping statistics --- 00:11:10.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.506 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:10.506 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:10.507 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:10.507 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:10.507 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2460733 00:11:10.507 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:10.507 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2460733 00:11:10.507 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # '[' -z 2460733 ']' 00:11:10.507 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.507 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:10.507 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.507 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:10.507 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:10.765 [2024-11-20 07:13:13.940939] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
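The entries above show nvmf_tcp_init wiring the two e810 ports into the test topology (one port moved into a private network namespace, 10.0.0.1/10.0.0.2 addressing, firewall opened for port 4420, ping sanity checks) and nvmfappstart then launching nvmf_tgt inside that namespace. A minimal sketch of the equivalent commands, with interface names, addresses and paths taken from the trace; the wait-for-RPC loop at the end is a simplified stand-in for the waitforlisten helper, not its exact implementation:

  # Target-side port goes into its own namespace; initiator-side port stays in the root namespace.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Allow NVMe/TCP traffic to the default listener port (the trace adds an SPDK_NVMF comment to the rule).
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Connectivity sanity checks in both directions.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # Start the target inside the namespace and wait for its RPC socket to come up.
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py rpc_get_methods &>/dev/null; do
      sleep 0.5
  done

Once the RPC socket answers, the invalid-argument test cases below are issued against this target through rpc.py.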
00:11:10.765 [2024-11-20 07:13:13.941015] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:10.765 [2024-11-20 07:13:14.019893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:10.765 [2024-11-20 07:13:14.081745] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:10.765 [2024-11-20 07:13:14.081803] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:10.765 [2024-11-20 07:13:14.081840] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:10.765 [2024-11-20 07:13:14.081856] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:10.765 [2024-11-20 07:13:14.081871] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:10.765 [2024-11-20 07:13:14.083554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:10.765 [2024-11-20 07:13:14.083613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:10.765 [2024-11-20 07:13:14.083636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:10.765 [2024-11-20 07:13:14.083643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.022 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:11.022 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@866 -- # return 0 00:11:11.022 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:11.023 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:11.023 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:11.023 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:11.023 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:11.023 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode6603 00:11:11.280 [2024-11-20 07:13:14.542840] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:11.280 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:11:11.280 { 00:11:11.280 "nqn": "nqn.2016-06.io.spdk:cnode6603", 00:11:11.280 "tgt_name": "foobar", 00:11:11.280 "method": "nvmf_create_subsystem", 00:11:11.280 "req_id": 1 00:11:11.280 } 00:11:11.280 Got JSON-RPC error response 00:11:11.280 response: 00:11:11.280 { 00:11:11.280 "code": -32603, 00:11:11.280 "message": "Unable to find target foobar" 00:11:11.280 }' 00:11:11.280 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:11:11.280 { 00:11:11.280 "nqn": "nqn.2016-06.io.spdk:cnode6603", 00:11:11.280 "tgt_name": "foobar", 00:11:11.280 "method": "nvmf_create_subsystem", 00:11:11.280 "req_id": 1 00:11:11.280 } 00:11:11.280 Got JSON-RPC error response 00:11:11.280 
response: 00:11:11.280 { 00:11:11.280 "code": -32603, 00:11:11.280 "message": "Unable to find target foobar" 00:11:11.280 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:11.280 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:11.280 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode31956 00:11:11.539 [2024-11-20 07:13:14.831826] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31956: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:11.539 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:11:11.539 { 00:11:11.539 "nqn": "nqn.2016-06.io.spdk:cnode31956", 00:11:11.539 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:11.539 "method": "nvmf_create_subsystem", 00:11:11.539 "req_id": 1 00:11:11.539 } 00:11:11.539 Got JSON-RPC error response 00:11:11.539 response: 00:11:11.539 { 00:11:11.539 "code": -32602, 00:11:11.539 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:11.539 }' 00:11:11.539 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:11:11.539 { 00:11:11.539 "nqn": "nqn.2016-06.io.spdk:cnode31956", 00:11:11.539 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:11.539 "method": "nvmf_create_subsystem", 00:11:11.539 "req_id": 1 00:11:11.539 } 00:11:11.539 Got JSON-RPC error response 00:11:11.539 response: 00:11:11.539 { 00:11:11.539 "code": -32602, 00:11:11.539 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:11.539 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:11.539 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:11.539 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode28525 00:11:11.797 [2024-11-20 07:13:15.104730] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28525: invalid model number 'SPDK_Controller' 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:11:11.797 { 00:11:11.797 "nqn": "nqn.2016-06.io.spdk:cnode28525", 00:11:11.797 "model_number": "SPDK_Controller\u001f", 00:11:11.797 "method": "nvmf_create_subsystem", 00:11:11.797 "req_id": 1 00:11:11.797 } 00:11:11.797 Got JSON-RPC error response 00:11:11.797 response: 00:11:11.797 { 00:11:11.797 "code": -32602, 00:11:11.797 "message": "Invalid MN SPDK_Controller\u001f" 00:11:11.797 }' 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:11:11.797 { 00:11:11.797 "nqn": "nqn.2016-06.io.spdk:cnode28525", 00:11:11.797 "model_number": "SPDK_Controller\u001f", 00:11:11.797 "method": "nvmf_create_subsystem", 00:11:11.797 "req_id": 1 00:11:11.797 } 00:11:11.797 Got JSON-RPC error response 00:11:11.797 response: 00:11:11.797 { 00:11:11.797 "code": -32602, 00:11:11.797 "message": "Invalid MN SPDK_Controller\u001f" 00:11:11.797 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:11:11.797 07:13:15 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.797 07:13:15 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:11:11.797 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:11:11.798 
07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 
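The character-by-character entries above and below are target/invalid.sh's gen_random_s helper assembling a 21-character serial number from the printable ASCII range (RANDOM was pinned to 0 at target/invalid.sh@16, so the sequence is deterministic across runs). A rough reconstruction of the loop being traced; the index-selection expression is not visible in the trace, so the RANDOM-based pick below is an assumption:

  # Sketch of gen_random_s as suggested by the trace: build a string of $1
  # characters whose codes are drawn from 32..127, converted via printf %x / echo -e.
  gen_random_s() {
      local length=$1 ll code
      local chars=($(seq 32 127))      # character codes, as in the traced chars=() array
      local string=
      for (( ll = 0; ll < length; ll++ )); do
          # Assumed selection: a pseudo-random element of chars[] (not shown in the trace).
          code=${chars[RANDOM % ${#chars[@]}]}
          string+=$(echo -e "\x$(printf %x "$code")")
      done
      echo "$string"
  }

Called as gen_random_s 21 here, it produces the serial number used in the next invalid-SN test; the trace echoes the finished string before passing it to rpc.py.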
00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ { == \- ]] 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '{<5UV-wq]iEWm*V\hMFb,' 00:11:11.798 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '{<5UV-wq]iEWm*V\hMFb,' nqn.2016-06.io.spdk:cnode19727 00:11:12.056 [2024-11-20 07:13:15.449784] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19727: invalid serial number '{<5UV-wq]iEWm*V\hMFb,' 00:11:12.056 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:11:12.056 { 00:11:12.056 "nqn": "nqn.2016-06.io.spdk:cnode19727", 00:11:12.056 "serial_number": "{<5UV-wq]iEWm*V\\hMFb,", 00:11:12.056 "method": "nvmf_create_subsystem", 00:11:12.056 "req_id": 1 00:11:12.056 } 00:11:12.056 Got JSON-RPC error response 00:11:12.056 response: 00:11:12.056 { 00:11:12.056 "code": -32602, 00:11:12.056 "message": "Invalid SN {<5UV-wq]iEWm*V\\hMFb," 00:11:12.056 }' 00:11:12.056 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:11:12.056 { 00:11:12.056 "nqn": "nqn.2016-06.io.spdk:cnode19727", 00:11:12.056 "serial_number": "{<5UV-wq]iEWm*V\\hMFb,", 00:11:12.056 "method": "nvmf_create_subsystem", 00:11:12.056 "req_id": 1 00:11:12.056 } 00:11:12.056 Got JSON-RPC error response 00:11:12.056 response: 00:11:12.056 { 00:11:12.056 "code": -32602, 00:11:12.056 "message": "Invalid SN {<5UV-wq]iEWm*V\\hMFb," 00:11:12.056 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:12.056 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:11:12.056 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:11:12.056 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' 
'74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:12.056 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:12.056 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:12.056 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:12.056 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:12.056 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:11:12.056 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:11:12.056 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:11:12.056 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:12.056 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:12.056 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:11:12.056 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:11:12.056 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:11:12.056 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:12.056 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:12.056 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:11:12.056 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 
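While the trace continues assembling the next random string (gen_random_s 41, for the invalid model-number case), the overall pattern of each negative test is already visible from the RPC calls and checks above: call nvmf_create_subsystem with a deliberately bad argument, capture the JSON-RPC error text, and match on the expected message. A condensed sketch of that cycle, using the rpc.py path, flags and NQNs shown in the trace; the 2>&1 capture and the trailing "|| true" guard are assumptions, since the trace only shows the captured output and the pattern match:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode

  # Unknown target name must be rejected.
  out=$($rpc nvmf_create_subsystem -t foobar "${nqn}6603" 2>&1) || true
  [[ $out == *"Unable to find target"* ]]

  # A serial number containing a control character (0x1f) must be rejected.
  out=$($rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' "${nqn}31956" 2>&1) || true
  [[ $out == *"Invalid SN"* ]]

  # Same for a model number with a control character.
  out=$($rpc nvmf_create_subsystem -d $'SPDK_Controller\037' "${nqn}28525" 2>&1) || true
  [[ $out == *"Invalid MN"* ]]

The randomly generated over-length serial and model numbers that follow in the trace are fed through the same capture-and-match cycle.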
00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 
00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:11:12.314 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length 
)) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=4 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x3c' 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:11:12.315 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:12.316 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:12.316 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ; == \- ]] 00:11:12.316 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ';)S>_ff4:OPw-T_ff4:OPw-T_ff4:OPw-T_ff4:OPw-T_ff4:OPw-T /dev/null' 00:11:15.481 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.385 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:17.385 00:11:17.385 real 0m9.337s 00:11:17.385 user 0m22.955s 00:11:17.385 sys 0m2.601s 00:11:17.385 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:17.385 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:17.385 ************************************ 00:11:17.385 END TEST nvmf_invalid 00:11:17.385 ************************************ 00:11:17.644 07:13:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:17.644 07:13:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:17.644 07:13:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:17.644 07:13:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:17.644 ************************************ 00:11:17.644 START TEST nvmf_connect_stress 00:11:17.644 ************************************ 00:11:17.644 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:17.644 * Looking for test storage... 
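Before following connect_stress further, a note on the nvmf_invalid block that just finished: the long run of printf %x / echo -e / string+= entries above is target/invalid.sh assembling a random string one byte at a time, which it then hands to the target as a deliberately malformed name. A minimal standalone sketch of the same byte-by-byte technique (the function name and the exact character range are illustrative assumptions, not taken from the script):

# pick a code point, render it with printf/echo -e, append it to the accumulator
gen_random_string() {
    local length=$1 string='' ll x
    for (( ll = 0; ll < length; ll++ )); do
        x=$(( RANDOM % 96 + 32 ))                     # a byte in 0x20..0x7f (0x7f is why $'\177' shows up above)
        string+=$(echo -e "\x$(printf %x "$x")")      # render the byte and append it
    done
    printf '%s\n' "$string"
}
gen_random_string 21                                  # emits 21 random bytes from that range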
00:11:17.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:17.644 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:17.644 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:11:17.644 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:17.644 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:17.644 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:17.644 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:17.644 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:17.644 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:11:17.644 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:11:17.644 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:11:17.644 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:11:17.644 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:11:17.644 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:11:17.644 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:11:17.644 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:17.644 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:11:17.644 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:11:17.644 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:17.644 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:17.644 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:11:17.644 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:11:17.644 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:17.644 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:11:17.644 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:11:17.644 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:11:17.644 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:11:17.644 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:17.644 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:11:17.644 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:11:17.644 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:17.644 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:17.644 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:11:17.644 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:17.644 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:17.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.644 --rc genhtml_branch_coverage=1 00:11:17.644 --rc genhtml_function_coverage=1 00:11:17.644 --rc genhtml_legend=1 00:11:17.644 --rc geninfo_all_blocks=1 00:11:17.644 --rc geninfo_unexecuted_blocks=1 00:11:17.644 00:11:17.644 ' 00:11:17.644 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:17.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.644 --rc genhtml_branch_coverage=1 00:11:17.644 --rc genhtml_function_coverage=1 00:11:17.644 --rc genhtml_legend=1 00:11:17.644 --rc geninfo_all_blocks=1 00:11:17.644 --rc geninfo_unexecuted_blocks=1 00:11:17.644 00:11:17.644 ' 00:11:17.644 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:17.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.644 --rc genhtml_branch_coverage=1 00:11:17.644 --rc genhtml_function_coverage=1 00:11:17.644 --rc genhtml_legend=1 00:11:17.644 --rc geninfo_all_blocks=1 00:11:17.644 --rc geninfo_unexecuted_blocks=1 00:11:17.644 00:11:17.644 ' 00:11:17.644 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:17.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.644 --rc genhtml_branch_coverage=1 00:11:17.644 --rc genhtml_function_coverage=1 00:11:17.644 --rc genhtml_legend=1 00:11:17.644 --rc geninfo_all_blocks=1 00:11:17.644 --rc geninfo_unexecuted_blocks=1 00:11:17.644 00:11:17.644 ' 00:11:17.644 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
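The cmp_versions / decimal / lt calls above are scripts/common.sh deciding whether the installed lcov is older than 2 before picking coverage flags: both version strings are split into fields and compared numerically, left to right. A simplified re-implementation of that idea (assuming purely numeric, dot-separated fields; the real helper also splits on '-' and ':'):

ver_lt() {                                    # succeeds when $1 is older than $2
    local -a ver1 ver2
    IFS=. read -ra ver1 <<< "$1"
    IFS=. read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} )) v a b
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}       # missing fields count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                                  # equal is not "less than"
}
ver_lt 1.15 2 && echo 'lcov 1.15 is older than 2'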
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:17.644 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:17.644 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:17.644 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:17.644 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:17.644 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:17.644 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:17.644 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:17.644 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:17.644 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:17.644 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:17.644 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:17.644 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:17.644 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:17.644 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:17.644 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:17.644 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:17.644 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:17.644 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:17.644 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:11:17.645 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.645 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.645 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.645 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
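nvmf/common.sh above derives the host identity from nvme-cli: gen-hostnqn returns a uuid-based NQN, and the host ID logged next to it is the uuid portion of that NQN. A tiny illustration of one way to extract the pair (the parameter-expansion step is an assumption, not necessarily how common.sh does it):

NVME_HOSTNQN=$(nvme gen-hostnqn)              # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}               # keep only the trailing uuid
echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"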
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.645 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.645 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.645 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:17.645 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.645 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:11:17.645 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:17.645 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:17.645 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:17.645 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:17.645 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:17.645 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
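The PATH value above shows the side effect of paths/export.sh being sourced once per nested test script: the same /opt/go, /opt/protoc and /opt/golangci directories are prepended again on every pass, so the variable keeps growing. Purely as an illustration (this is not what export.sh does), an idempotent prepend keeps the list stable however many times it runs:

prepend_path() {
    case ":$PATH:" in
        *":$1:"*) ;;                          # already present, leave PATH alone
        *) PATH="$1:$PATH" ;;
    esac
}
prepend_path /opt/golangci/1.54.2/bin
prepend_path /opt/protoc/21.7/bin
prepend_path /opt/go/1.21.1/bin
export PATH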
-- # '[' '' -eq 1 ']' 00:11:17.645 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:17.645 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:17.645 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:17.645 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:17.645 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:17.645 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:17.645 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:17.645 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:17.645 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:17.645 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:17.645 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.645 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.645 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.645 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:17.645 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:17.645 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:11:17.645 07:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:11:20.176 07:13:23 
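Earlier in this stretch, nvmf/common.sh line 33 logged "[: : integer expression expected" because build_nvmf_app_args evaluated [ '' -eq 1 ] with an empty expansion; the test just returns false and the script carries on, but the expansion is not numeric. The usual guard is to default it, sketched here with a purely hypothetical variable name:

# SOME_TEST_FLAG stands in for whatever variable was empty at common.sh line 33
if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then     # empty/unset collapses to 0, so the test stays numeric
    echo 'flag enabled'
else
    echo 'flag disabled'
fi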
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:20.176 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
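The e810/x722/mlx arrays above are keyed by vendor:device IDs (the Intel E810 ports on this node are 8086:159b) and resolved through a prebuilt pci_bus_cache. As a rough equivalent query only (nvmf/common.sh builds its cache differently), lspci can answer the same question directly:

lspci -D -n -d 8086:159b | awk '{print $1}'   # prints the matching PCI addresses,
                                              # e.g. 0000:09:00.0 and 0000:09:00.1 on this node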
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:20.176 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:20.176 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:20.177 Found net devices under 0000:09:00.0: cvl_0_0 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:20.177 Found net devices under 0000:09:00.1: cvl_0_1 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
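The "Found net devices under 0000:09:00.x: cvl_0_x" lines come from expanding /sys/bus/pci/devices/<pci>/net/* for each detected port and keeping interfaces whose link is up. A standalone sketch of that walk (the operstate check is an assumption about how "up" is decided, not a copy of nvmf/common.sh):

pci=0000:09:00.0                                   # one of the addresses found above
for dev in "/sys/bus/pci/devices/$pci/net/"*; do
    [ -e "$dev" ] || continue                      # glob did not match: no net children
    name=${dev##*/}
    if [ "$(cat "$dev/operstate" 2>/dev/null)" = up ]; then
        echo "Found net devices under $pci: $name"
    fi
done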
-- # net_devs+=("${pci_net_devs[@]}") 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:20.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:20.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.312 ms 00:11:20.177 00:11:20.177 --- 10.0.0.2 ping statistics --- 00:11:20.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.177 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:20.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:20.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:11:20.177 00:11:20.177 --- 10.0.0.1 ping statistics --- 00:11:20.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.177 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2463385 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2463385 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # '[' -z 2463385 ']' 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
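Condensed into one place, the nvmf_tcp_init sequence above (interface names and addresses are exactly the ones in the log, not assumptions) is:

ip -4 addr flush cvl_0_0                           # start both ports from a clean state
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                       # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address, host namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # host -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> host sanity check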
/var/tmp/spdk.sock...' 00:11:20.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:20.177 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:20.177 [2024-11-20 07:13:23.379293] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:11:20.177 [2024-11-20 07:13:23.379384] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:20.178 [2024-11-20 07:13:23.453054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:20.178 [2024-11-20 07:13:23.513478] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:20.178 [2024-11-20 07:13:23.513527] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:20.178 [2024-11-20 07:13:23.513542] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:20.178 [2024-11-20 07:13:23.513554] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:20.178 [2024-11-20 07:13:23.513564] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:20.178 [2024-11-20 07:13:23.515065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:20.178 [2024-11-20 07:13:23.515132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:20.178 [2024-11-20 07:13:23.515127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@866 -- # return 0 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:20.436 [2024-11-20 07:13:23.682379] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
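nvmf_tgt is now up inside the namespace with core mask 0xE (hence the three reactors on cores 1, 2 and 3), and rpc_cmd has created the TCP transport and the cnode1 subsystem; just below it also adds a listener on 10.0.0.2:4420 and a NULL1 null bdev. Written as direct scripts/rpc.py invocations against the default /var/tmp/spdk.sock socket, the whole configuration sequence is:

rpc=scripts/rpc.py                                         # default RPC socket: /var/tmp/spdk.sock
$rpc nvmf_create_transport -t tcp -o -u 8192               # -u 8192: 8 KiB IO unit size
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512                       # 1000 MiB null bdev, 512-byte blocks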
00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:20.436 [2024-11-20 07:13:23.699363] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:20.436 NULL1 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2463407 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:20.436 07:13:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:20.436 07:13:23 
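From here to the end of the excerpt the log alternates "kill -0 2463407" with another rpc_cmd batch: connect_stress (PID 2463407, launched with -t 10 against the 10.0.0.2:4420 listener) keeps connecting and disconnecting, while the shell replays the RPC batch it just assembled into rpc.txt for as long as the tool stays alive. A simplified sketch of that shape (the placeholder RPC and the file handling are assumptions; the here-doc contents built by the cat calls are not visible in this log):

rpcs=rpc.txt
rm -f "$rpcs"
for i in $(seq 1 20); do
    echo rpc_get_methods >> "$rpcs"                # harmless placeholder; the real batch differs
done
while kill -0 "$PERF_PID" 2>/dev/null; do          # stress tool still running?
    while read -r method; do
        scripts/rpc.py $method > /dev/null         # replay the batch, one call at a time
    done < "$rpcs"
done
wait "$PERF_PID"                                   # pick up the tool's exit status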
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2463407 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.436 07:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:20.694 07:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.694 07:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2463407 00:11:20.694 07:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:20.694 07:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.694 07:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:21.260 07:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.260 07:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2463407 00:11:21.260 07:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:21.260 07:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.260 07:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:21.519 07:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.519 07:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2463407 00:11:21.519 07:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:21.519 07:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.519 07:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:21.783 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.783 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2463407 00:11:21.783 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:21.783 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.783 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:22.042 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.042 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2463407 00:11:22.042 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:22.042 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.042 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:22.299 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.300 07:13:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2463407 00:11:22.300 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:22.300 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.300 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:22.865 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.865 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2463407 00:11:22.865 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:22.865 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.865 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:23.124 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.124 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2463407 00:11:23.124 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:23.124 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.124 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:23.381 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.381 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2463407 00:11:23.381 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:23.381 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.381 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:23.639 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.639 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2463407 00:11:23.639 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:23.639 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.639 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:23.896 07:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.896 07:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2463407 00:11:23.896 07:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:23.896 07:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.896 07:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:24.461 07:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.461 07:13:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2463407 00:11:24.462 07:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:24.462 07:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.462 07:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:24.719 07:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.719 07:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2463407 00:11:24.720 07:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:24.720 07:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.720 07:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:24.977 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.977 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2463407 00:11:24.978 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:24.978 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.978 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:25.235 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.235 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2463407 00:11:25.235 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:25.235 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.235 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:25.494 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.494 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2463407 00:11:25.494 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:25.494 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.494 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:26.060 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.060 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2463407 00:11:26.060 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:26.060 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.060 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:26.385 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.385 07:13:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2463407 00:11:26.385 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:26.385 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.385 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:26.643 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.643 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2463407 00:11:26.643 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:26.643 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.643 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:26.901 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.901 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2463407 00:11:26.901 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:26.901 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.901 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:27.158 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.158 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2463407 00:11:27.158 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:27.158 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.158 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:27.416 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.416 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2463407 00:11:27.416 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:27.416 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.416 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:27.981 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.981 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2463407 00:11:27.981 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:27.981 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.981 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:28.238 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.238 07:13:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2463407 00:11:28.238 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:28.238 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.238 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:28.496 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.496 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2463407 00:11:28.496 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:28.496 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.496 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:28.753 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.753 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2463407 00:11:28.753 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:28.753 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.753 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:29.010 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.010 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2463407 00:11:29.010 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:29.010 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.010 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:29.649 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.649 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2463407 00:11:29.649 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:29.649 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.649 07:13:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:29.907 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.907 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2463407 00:11:29.907 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:29.907 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.907 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.164 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.164 07:13:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2463407 00:11:30.164 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:30.164 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.164 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.422 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.422 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2463407 00:11:30.422 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:30.422 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.422 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.422 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:30.681 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.681 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2463407 00:11:30.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2463407) - No such process 00:11:30.681 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2463407 00:11:30.681 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:30.681 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:30.681 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:30.681 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:30.681 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:11:30.681 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:30.681 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:11:30.681 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:30.681 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:30.681 rmmod nvme_tcp 00:11:30.681 rmmod nvme_fabrics 00:11:30.681 rmmod nvme_keyring 00:11:30.681 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:30.681 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:11:30.681 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:11:30.681 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2463385 ']' 00:11:30.681 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2463385 00:11:30.681 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' -z 2463385 ']' 00:11:30.681 07:13:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # kill -0 2463385 00:11:30.681 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # uname 00:11:30.681 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:30.681 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2463385 00:11:30.939 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:11:30.939 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:11:30.939 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2463385' 00:11:30.939 killing process with pid 2463385 00:11:30.939 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # kill 2463385 00:11:30.940 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@976 -- # wait 2463385 00:11:30.940 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:30.940 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:31.199 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:31.199 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:11:31.199 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:11:31.199 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:31.199 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:11:31.199 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:31.199 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:31.199 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.199 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:31.199 07:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.104 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:33.104 00:11:33.104 real 0m15.562s 00:11:33.104 user 0m38.571s 00:11:33.104 sys 0m6.045s 00:11:33.104 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:33.104 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:33.104 ************************************ 00:11:33.104 END TEST nvmf_connect_stress 00:11:33.104 ************************************ 00:11:33.104 07:13:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:33.104 07:13:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:33.104 
07:13:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:33.104 07:13:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:33.104 ************************************ 00:11:33.104 START TEST nvmf_fused_ordering 00:11:33.104 ************************************ 00:11:33.104 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:33.104 * Looking for test storage... 00:11:33.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:33.104 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:33.104 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:11:33.105 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:33.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.364 --rc genhtml_branch_coverage=1 00:11:33.364 --rc genhtml_function_coverage=1 00:11:33.364 --rc genhtml_legend=1 00:11:33.364 --rc geninfo_all_blocks=1 00:11:33.364 --rc geninfo_unexecuted_blocks=1 00:11:33.364 00:11:33.364 ' 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:33.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.364 --rc genhtml_branch_coverage=1 00:11:33.364 --rc genhtml_function_coverage=1 00:11:33.364 --rc genhtml_legend=1 00:11:33.364 --rc geninfo_all_blocks=1 00:11:33.364 --rc geninfo_unexecuted_blocks=1 00:11:33.364 00:11:33.364 ' 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:33.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.364 --rc genhtml_branch_coverage=1 00:11:33.364 --rc genhtml_function_coverage=1 00:11:33.364 --rc genhtml_legend=1 00:11:33.364 --rc geninfo_all_blocks=1 00:11:33.364 --rc geninfo_unexecuted_blocks=1 00:11:33.364 00:11:33.364 ' 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:33.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.364 --rc genhtml_branch_coverage=1 00:11:33.364 --rc genhtml_function_coverage=1 00:11:33.364 --rc genhtml_legend=1 00:11:33.364 --rc geninfo_all_blocks=1 00:11:33.364 --rc geninfo_unexecuted_blocks=1 00:11:33.364 00:11:33.364 ' 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:33.364 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:33.365 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:33.365 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:33.365 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:33.365 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:33.365 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:33.365 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:33.365 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:11:33.365 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.365 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.365 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.365 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.365 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.365 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.365 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:33.365 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.365 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:11:33.365 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:33.365 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:33.365 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:33.365 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:33.365 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:33.365 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:11:33.365 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:33.365 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:33.365 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:33.365 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:33.365 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:33.365 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:33.365 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:33.365 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:33.365 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:33.365 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:33.365 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.365 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.365 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.365 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:33.365 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:33.365 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:11:33.365 07:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:35.271 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:35.271 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:11:35.271 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:35.271 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:35.271 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:35.271 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:35.271 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:35.271 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:11:35.271 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:35.271 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:11:35.271 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:11:35.271 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:11:35.271 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:11:35.271 07:13:38 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:11:35.271 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:11:35.271 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:35.271 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:35.271 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:35.271 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:35.271 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:35.271 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:35.271 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:35.271 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:35.271 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:35.271 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:35.271 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:35.271 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:35.271 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:35.271 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:35.271 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:35.271 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:35.271 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:35.271 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:35.271 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:35.271 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:35.271 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:35.271 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:35.271 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:35.271 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.271 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.271 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:35.271 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:35.272 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:35.272 Found net devices under 0000:09:00.0: cvl_0_0 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:35.272 Found net devices under 0000:09:00.1: cvl_0_1 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:35.272 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:35.272 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:11:35.272 00:11:35.272 --- 10.0.0.2 ping statistics --- 00:11:35.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.272 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:35.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:35.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:11:35.272 00:11:35.272 --- 10.0.0.1 ping statistics --- 00:11:35.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.272 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2466599 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2466599 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # '[' -z 2466599 ']' 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:35.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:35.272 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:35.530 [2024-11-20 07:13:38.726497] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:11:35.531 [2024-11-20 07:13:38.726572] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:35.531 [2024-11-20 07:13:38.797268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.531 [2024-11-20 07:13:38.856185] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:35.531 [2024-11-20 07:13:38.856234] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:35.531 [2024-11-20 07:13:38.856262] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:35.531 [2024-11-20 07:13:38.856273] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:35.531 [2024-11-20 07:13:38.856283] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:35.531 [2024-11-20 07:13:38.856943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.789 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:35.789 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@866 -- # return 0 00:11:35.789 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:35.789 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:35.789 07:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:35.789 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:35.789 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:35.789 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.789 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:35.789 [2024-11-20 07:13:39.010705] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:35.789 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.789 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:35.789 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.789 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:35.789 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:35.789 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.789 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.789 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:35.789 [2024-11-20 07:13:39.026868] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:35.789 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.789 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:35.789 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.789 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:35.789 NULL1 00:11:35.789 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.789 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:35.789 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.789 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:35.789 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.789 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:35.789 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.789 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:35.789 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.789 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:35.789 [2024-11-20 07:13:39.071200] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:11:35.789 [2024-11-20 07:13:39.071234] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2466710 ]
00:11:36.354 Attached to nqn.2016-06.io.spdk:cnode1
00:11:36.354 Namespace ID: 1 size: 1GB
00:11:36.354 fused_ordering(0)
00:11:36.354 fused_ordering(1)
[... fused_ordering(2) through fused_ordering(1022) elided: sequential entries, timestamps 00:11:36.354 through 00:11:38.005 ...]
00:11:38.005 fused_ordering(1023)
00:11:38.005 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:11:38.006 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:11:38.006 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:38.006 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:11:38.006 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:38.006 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:11:38.006 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:38.006 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:38.006 rmmod nvme_tcp
00:11:38.006 rmmod nvme_fabrics
00:11:38.006 rmmod nvme_keyring
00:11:38.006 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:38.006 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e
00:11:38.006 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0
00:11:38.006 07:13:41 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2466599 ']' 00:11:38.006 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2466599 00:11:38.006 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' -z 2466599 ']' 00:11:38.006 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # kill -0 2466599 00:11:38.006 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # uname 00:11:38.263 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:38.263 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2466599 00:11:38.263 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:11:38.263 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:11:38.263 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2466599' 00:11:38.263 killing process with pid 2466599 00:11:38.263 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # kill 2466599 00:11:38.263 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@976 -- # wait 2466599 00:11:38.523 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:38.523 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:38.523 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:38.523 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:11:38.523 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:11:38.523 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:38.523 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:11:38.523 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:38.523 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:38.523 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.523 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:38.523 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.432 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:40.432 00:11:40.432 real 0m7.286s 00:11:40.432 user 0m4.976s 00:11:40.432 sys 0m2.983s 00:11:40.432 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:40.432 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:40.432 ************************************ 00:11:40.432 END TEST nvmf_fused_ordering 00:11:40.432 
************************************ 00:11:40.432 07:13:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:40.432 07:13:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:40.432 07:13:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:40.432 07:13:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:40.432 ************************************ 00:11:40.432 START TEST nvmf_ns_masking 00:11:40.432 ************************************ 00:11:40.432 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:40.432 * Looking for test storage... 00:11:40.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:40.693 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:40.693 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:11:40.693 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:40.693 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:40.693 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:40.693 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:40.693 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:40.693 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:11:40.693 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:11:40.693 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:11:40.693 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:11:40.693 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:11:40.693 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:11:40.693 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:11:40.693 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:40.693 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:11:40.693 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:11:40.693 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:40.693 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:40.693 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:11:40.693 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:11:40.693 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:40.693 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:11:40.693 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:11:40.693 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:11:40.693 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:11:40.693 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:40.693 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:11:40.693 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:11:40.693 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:40.693 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:40.693 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:40.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.694 --rc genhtml_branch_coverage=1 00:11:40.694 --rc genhtml_function_coverage=1 00:11:40.694 --rc genhtml_legend=1 00:11:40.694 --rc geninfo_all_blocks=1 00:11:40.694 --rc geninfo_unexecuted_blocks=1 00:11:40.694 00:11:40.694 ' 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:40.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.694 --rc genhtml_branch_coverage=1 00:11:40.694 --rc genhtml_function_coverage=1 00:11:40.694 --rc genhtml_legend=1 00:11:40.694 --rc geninfo_all_blocks=1 00:11:40.694 --rc geninfo_unexecuted_blocks=1 00:11:40.694 00:11:40.694 ' 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:40.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.694 --rc genhtml_branch_coverage=1 00:11:40.694 --rc genhtml_function_coverage=1 00:11:40.694 --rc genhtml_legend=1 00:11:40.694 --rc geninfo_all_blocks=1 00:11:40.694 --rc geninfo_unexecuted_blocks=1 00:11:40.694 00:11:40.694 ' 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:40.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.694 --rc genhtml_branch_coverage=1 00:11:40.694 --rc genhtml_function_coverage=1 00:11:40.694 --rc genhtml_legend=1 00:11:40.694 --rc geninfo_all_blocks=1 00:11:40.694 --rc geninfo_unexecuted_blocks=1 00:11:40.694 00:11:40.694 ' 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:40.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=6e9ada7c-d8c9-4928-b852-94c5c43ff5d2 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=56e9b5db-06f4-4539-bbf7-de91afecbbf6 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=6cbf262a-5aae-4f49-8feb-917104960ebe 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:11:40.694 07:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:43.227 07:13:46 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:43.227 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:43.227 07:13:46 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:43.227 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:43.227 Found net devices under 0000:09:00.0: cvl_0_0 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
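The loop traced here resolves each detected E810 port to its kernel net device by globbing the per-device net/ directory in sysfs and then checking that the interface is up. A minimal standalone sketch of that lookup, assuming the PCI addresses reported earlier in this run (0000:09:00.0 and 0000:09:00.1) and the cvl_* interface names this host happens to use:

  #!/usr/bin/env bash
  # Sketch: map a NIC's PCI function to its kernel net device via sysfs and report link state.
  # The PCI addresses below are the ones this run detected; adjust for other hosts.
  pci_devs=("0000:09:00.0" "0000:09:00.1")
  net_devs=()
  for pci in "${pci_devs[@]}"; do
      for path in "/sys/bus/pci/devices/$pci/net/"*; do
          [[ -e "$path" ]] || continue            # glob did not match: no net device bound
          dev=${path##*/}                          # e.g. cvl_0_0 / cvl_0_1 in this log
          state=$(cat "/sys/class/net/$dev/operstate" 2>/dev/null)
          echo "Found net device under $pci: $dev (operstate: ${state:-unknown})"
          net_devs+=("$dev")
      done
  done
  echo "Collected net devices: ${net_devs[*]}"

In this run the two devices found (cvl_0_0 and cvl_0_1) are then assigned as the target and initiator interfaces, which is what the following lines set up.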
00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.227 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:43.227 Found net devices under 0000:09:00.1: cvl_0_1 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:43.228 07:13:46 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:43.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:43.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:11:43.228 00:11:43.228 --- 10.0.0.2 ping statistics --- 00:11:43.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.228 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:43.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:43.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:11:43.228 00:11:43.228 --- 10.0.0.1 ping statistics --- 00:11:43.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.228 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2468928 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2468928 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 2468928 ']' 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:43.228 [2024-11-20 07:13:46.363651] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:11:43.228 [2024-11-20 07:13:46.363725] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.228 [2024-11-20 07:13:46.435050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.228 [2024-11-20 07:13:46.490712] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:43.228 [2024-11-20 07:13:46.490769] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:43.228 [2024-11-20 07:13:46.490791] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:43.228 [2024-11-20 07:13:46.490807] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:43.228 [2024-11-20 07:13:46.490836] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
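
Note: with the reactor started, the target (nvmf_tgt -i 0 -e 0xFFFF, PID 2468928) is running inside the cvl_0_0_ns_spdk namespace and the rest of ns_masking.sh drives it over JSON-RPC. Stripped of the long workspace paths, the setup and the masking calls exercised in the trace below come down to this sequence (a condensed sketch of the traced commands, with rpc.py standing in for the spdk/scripts/rpc.py path):

    # target-side setup
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py bdev_malloc_create 64 512 -b Malloc2
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side (root namespace)
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I 6cbf262a-5aae-4f49-8feb-917104960ebe -a 10.0.0.2 -s 4420 -i 4

    # the masking primitives under test
    rpc.py nvmf_subsystem_add_ns  nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    rpc.py nvmf_ns_add_host       nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    rpc.py nvmf_ns_remove_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

Visibility is then judged from the host side with nvme list-ns and nvme id-ns -n <nsid> -o json | jq -r .nguid: a namespace masked from the connecting host reports an all-zero NGUID, which is exactly what the comparisons against the all-zero pattern in the trace check.
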
00:11:43.228 [2024-11-20 07:13:46.491485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:43.228 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:43.486 [2024-11-20 07:13:46.887530] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:43.486 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:43.486 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:43.486 07:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:44.051 Malloc1 00:11:44.051 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:44.309 Malloc2 00:11:44.309 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:44.567 07:13:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:44.824 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:45.082 [2024-11-20 07:13:48.294450] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:45.082 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:45.082 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6cbf262a-5aae-4f49-8feb-917104960ebe -a 10.0.0.2 -s 4420 -i 4 00:11:45.339 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:45.339 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:11:45.340 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:45.340 07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:45.340 
07:13:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:11:47.237 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:47.237 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:47.237 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:47.237 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:47.237 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:47.237 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:11:47.237 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:47.237 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:47.237 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:47.237 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:47.237 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:47.237 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:47.237 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:47.237 [ 0]:0x1 00:11:47.237 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:47.237 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:47.495 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c512caf107d943dabddf0ac4f2147950 00:11:47.495 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c512caf107d943dabddf0ac4f2147950 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:47.495 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:47.752 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:47.752 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:47.752 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:47.752 [ 0]:0x1 00:11:47.752 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:47.752 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:47.752 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c512caf107d943dabddf0ac4f2147950 00:11:47.752 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c512caf107d943dabddf0ac4f2147950 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:47.752 07:13:51 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:47.752 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:47.752 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:47.752 [ 1]:0x2 00:11:47.752 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:47.752 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:47.752 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ce044932e1144dbbaf45dc8f8bf0a2c8 00:11:47.752 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ce044932e1144dbbaf45dc8f8bf0a2c8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:47.752 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:11:47.752 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:48.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.010 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:48.267 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:48.524 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:48.524 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6cbf262a-5aae-4f49-8feb-917104960ebe -a 10.0.0.2 -s 4420 -i 4 00:11:48.782 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:48.782 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:11:48.782 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:48.782 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 1 ]] 00:11:48.782 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=1 00:11:48.782 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:11:50.681 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:50.681 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:50.681 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:50.681 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:50.681 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:50.681 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # 
return 0 00:11:50.681 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:50.681 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:50.681 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:50.681 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:50.681 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:11:50.681 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:11:50.681 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:11:50.681 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:11:50.681 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:50.681 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:11:50.681 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:50.681 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:11:50.681 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:50.681 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:50.939 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:50.939 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:50.939 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:50.939 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:50.939 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:11:50.939 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:50.939 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:50.939 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:50.939 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:11:50.939 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:50.939 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:50.939 [ 0]:0x2 00:11:50.939 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:50.939 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:50.939 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=ce044932e1144dbbaf45dc8f8bf0a2c8 00:11:50.939 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ce044932e1144dbbaf45dc8f8bf0a2c8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:50.939 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:51.198 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:11:51.198 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:51.198 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:51.198 [ 0]:0x1 00:11:51.198 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:51.198 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:51.456 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c512caf107d943dabddf0ac4f2147950 00:11:51.456 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c512caf107d943dabddf0ac4f2147950 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:51.456 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:11:51.456 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:51.456 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:51.456 [ 1]:0x2 00:11:51.456 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:51.456 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:51.456 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ce044932e1144dbbaf45dc8f8bf0a2c8 00:11:51.456 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ce044932e1144dbbaf45dc8f8bf0a2c8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:51.456 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:51.714 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:11:51.714 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:11:51.714 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:11:51.714 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:11:51.714 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:51.714 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:11:51.714 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:51.714 07:13:54 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:11:51.714 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:51.714 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:51.714 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:51.714 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:51.714 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:51.714 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:51.714 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:11:51.714 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:51.714 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:51.715 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:51.715 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:11:51.715 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:51.715 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:51.715 [ 0]:0x2 00:11:51.715 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:51.715 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:51.715 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ce044932e1144dbbaf45dc8f8bf0a2c8 00:11:51.715 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ce044932e1144dbbaf45dc8f8bf0a2c8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:51.715 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:11:51.715 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:51.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.715 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:52.280 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:11:52.280 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6cbf262a-5aae-4f49-8feb-917104960ebe -a 10.0.0.2 -s 4420 -i 4 00:11:52.280 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:52.280 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:11:52.280 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:52.280 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:11:52.280 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:11:52.280 07:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:11:54.179 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:54.179 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:54.179 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:54.437 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:11:54.437 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:54.437 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:11:54.437 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:54.437 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:54.437 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:54.437 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:54.437 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:11:54.437 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:54.437 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:54.437 [ 0]:0x1 00:11:54.437 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:54.437 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:54.437 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c512caf107d943dabddf0ac4f2147950 00:11:54.437 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c512caf107d943dabddf0ac4f2147950 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:54.437 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:11:54.437 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:54.437 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:54.437 [ 1]:0x2 00:11:54.437 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:54.437 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:54.437 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ce044932e1144dbbaf45dc8f8bf0a2c8 00:11:54.437 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ce044932e1144dbbaf45dc8f8bf0a2c8 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:54.437 07:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:54.695 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:11:54.695 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:11:54.695 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:11:54.695 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:11:54.695 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:54.695 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:11:54.695 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:54.695 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:11:54.695 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:54.695 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:54.695 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:54.695 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:54.695 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:54.695 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:54.695 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:11:54.695 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:54.953 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:54.953 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:54.953 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:11:54.953 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:54.953 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:54.953 [ 0]:0x2 00:11:54.953 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:54.953 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:54.953 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ce044932e1144dbbaf45dc8f8bf0a2c8 00:11:54.953 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ce044932e1144dbbaf45dc8f8bf0a2c8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:54.953 07:13:58 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:54.953 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:11:54.953 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:54.953 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:54.953 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:54.953 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:54.953 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:54.953 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:54.953 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:54.953 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:54.953 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:54.953 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:55.211 [2024-11-20 07:13:58.473063] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:55.211 request: 00:11:55.211 { 00:11:55.211 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:55.211 "nsid": 2, 00:11:55.211 "host": "nqn.2016-06.io.spdk:host1", 00:11:55.211 "method": "nvmf_ns_remove_host", 00:11:55.211 "req_id": 1 00:11:55.211 } 00:11:55.211 Got JSON-RPC error response 00:11:55.211 response: 00:11:55.211 { 00:11:55.211 "code": -32602, 00:11:55.211 "message": "Invalid parameters" 00:11:55.211 } 00:11:55.211 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:11:55.211 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:55.211 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:55.211 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:55.211 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:11:55.211 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:11:55.211 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:11:55.211 07:13:58 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:11:55.211 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:55.211 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:11:55.211 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:55.211 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:11:55.211 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:55.211 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:55.211 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:55.211 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:55.211 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:55.211 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:55.211 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:11:55.211 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:55.211 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:55.211 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:55.211 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:11:55.211 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:55.211 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:55.211 [ 0]:0x2 00:11:55.211 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:55.212 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:55.212 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ce044932e1144dbbaf45dc8f8bf0a2c8 00:11:55.212 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ce044932e1144dbbaf45dc8f8bf0a2c8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:55.212 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:11:55.212 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:55.212 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.212 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2470553 00:11:55.212 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:11:55.212 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:11:55.212 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2470553 /var/tmp/host.sock 00:11:55.212 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 2470553 ']' 00:11:55.212 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:11:55.212 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:55.212 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:55.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:55.212 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:55.212 07:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:55.470 [2024-11-20 07:13:58.667813] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:11:55.470 [2024-11-20 07:13:58.667895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2470553 ] 00:11:55.470 [2024-11-20 07:13:58.734733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.470 [2024-11-20 07:13:58.793142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.728 07:13:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:55.728 07:13:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:11:55.728 07:13:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:55.986 07:13:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:56.552 07:13:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 6e9ada7c-d8c9-4928-b852-94c5c43ff5d2 00:11:56.552 07:13:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:11:56.552 07:13:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 6E9ADA7CD8C94928B85294C5C43FF5D2 -i 00:11:56.552 07:13:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 56e9b5db-06f4-4539-bbf7-de91afecbbf6 00:11:56.552 07:13:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:11:56.552 07:13:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 56E9B5DB06F44539BBF7DE91AFECBBF6 -i 00:11:57.117 07:14:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:57.117 07:14:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:11:57.683 07:14:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:57.683 07:14:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:57.941 nvme0n1 00:11:57.941 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:57.941 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:58.506 nvme1n2 00:11:58.506 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:11:58.506 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:11:58.506 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:11:58.506 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:11:58.506 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:11:58.506 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:11:58.764 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:11:58.764 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:11:58.764 07:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:11:59.021 07:14:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 6e9ada7c-d8c9-4928-b852-94c5c43ff5d2 == \6\e\9\a\d\a\7\c\-\d\8\c\9\-\4\9\2\8\-\b\8\5\2\-\9\4\c\5\c\4\3\f\f\5\d\2 ]] 00:11:59.021 07:14:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:11:59.021 07:14:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:11:59.021 07:14:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:11:59.279 07:14:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
56e9b5db-06f4-4539-bbf7-de91afecbbf6 == \5\6\e\9\b\5\d\b\-\0\6\f\4\-\4\5\3\9\-\b\b\f\7\-\d\e\9\1\a\f\e\c\b\b\f\6 ]] 00:11:59.279 07:14:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.537 07:14:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:59.794 07:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 6e9ada7c-d8c9-4928-b852-94c5c43ff5d2 00:11:59.794 07:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:11:59.794 07:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 6E9ADA7CD8C94928B85294C5C43FF5D2 00:11:59.794 07:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:11:59.795 07:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 6E9ADA7CD8C94928B85294C5C43FF5D2 00:11:59.795 07:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:59.795 07:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:59.795 07:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:59.795 07:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:59.795 07:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:59.795 07:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:59.795 07:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:59.795 07:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:59.795 07:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 6E9ADA7CD8C94928B85294C5C43FF5D2 00:12:00.053 [2024-11-20 07:14:03.327203] bdev.c:8621:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:12:00.053 [2024-11-20 07:14:03.327243] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:12:00.053 [2024-11-20 07:14:03.327272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:00.053 request: 00:12:00.053 { 00:12:00.053 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:00.053 "namespace": { 00:12:00.053 "bdev_name": 
"invalid", 00:12:00.053 "nsid": 1, 00:12:00.053 "nguid": "6E9ADA7CD8C94928B85294C5C43FF5D2", 00:12:00.053 "no_auto_visible": false 00:12:00.053 }, 00:12:00.053 "method": "nvmf_subsystem_add_ns", 00:12:00.053 "req_id": 1 00:12:00.053 } 00:12:00.053 Got JSON-RPC error response 00:12:00.053 response: 00:12:00.053 { 00:12:00.053 "code": -32602, 00:12:00.053 "message": "Invalid parameters" 00:12:00.053 } 00:12:00.053 07:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:00.053 07:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:00.053 07:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:00.053 07:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:00.053 07:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 6e9ada7c-d8c9-4928-b852-94c5c43ff5d2 00:12:00.053 07:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:00.053 07:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 6E9ADA7CD8C94928B85294C5C43FF5D2 -i 00:12:00.310 07:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:12:02.284 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:12:02.284 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:12:02.284 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:02.566 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:12:02.566 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2470553 00:12:02.566 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 2470553 ']' 00:12:02.566 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 2470553 00:12:02.566 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:12:02.566 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:02.566 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2470553 00:12:02.566 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:12:02.566 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:12:02.566 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2470553' 00:12:02.566 killing process with pid 2470553 00:12:02.566 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 2470553 00:12:02.566 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 2470553 00:12:03.131 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:03.388 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:12:03.388 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:12:03.388 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:03.388 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:12:03.388 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:03.388 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:12:03.388 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:03.388 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:03.388 rmmod nvme_tcp 00:12:03.388 rmmod nvme_fabrics 00:12:03.388 rmmod nvme_keyring 00:12:03.388 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:03.388 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:12:03.388 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:12:03.388 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2468928 ']' 00:12:03.388 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2468928 00:12:03.388 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 2468928 ']' 00:12:03.388 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 2468928 00:12:03.388 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:12:03.388 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:03.388 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2468928 00:12:03.388 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:03.388 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:03.388 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2468928' 00:12:03.388 killing process with pid 2468928 00:12:03.388 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 2468928 00:12:03.388 07:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 2468928 00:12:03.647 07:14:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:03.647 07:14:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:03.647 07:14:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:03.647 07:14:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:12:03.647 07:14:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:12:03.647 07:14:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:12:03.647 07:14:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:12:03.647 07:14:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:03.647 07:14:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:03.647 07:14:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.647 07:14:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:03.647 07:14:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.185 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:06.185 00:12:06.185 real 0m25.245s 00:12:06.185 user 0m36.842s 00:12:06.185 sys 0m4.679s 00:12:06.185 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:06.185 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:06.185 ************************************ 00:12:06.185 END TEST nvmf_ns_masking 00:12:06.185 ************************************ 00:12:06.185 07:14:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:12:06.185 07:14:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:06.186 ************************************ 00:12:06.186 START TEST nvmf_nvme_cli 00:12:06.186 ************************************ 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:06.186 * Looking for test storage... 
00:12:06.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:06.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.186 --rc genhtml_branch_coverage=1 00:12:06.186 --rc genhtml_function_coverage=1 00:12:06.186 --rc genhtml_legend=1 00:12:06.186 --rc geninfo_all_blocks=1 00:12:06.186 --rc geninfo_unexecuted_blocks=1 00:12:06.186 00:12:06.186 ' 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:06.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.186 --rc genhtml_branch_coverage=1 00:12:06.186 --rc genhtml_function_coverage=1 00:12:06.186 --rc genhtml_legend=1 00:12:06.186 --rc geninfo_all_blocks=1 00:12:06.186 --rc geninfo_unexecuted_blocks=1 00:12:06.186 00:12:06.186 ' 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:06.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.186 --rc genhtml_branch_coverage=1 00:12:06.186 --rc genhtml_function_coverage=1 00:12:06.186 --rc genhtml_legend=1 00:12:06.186 --rc geninfo_all_blocks=1 00:12:06.186 --rc geninfo_unexecuted_blocks=1 00:12:06.186 00:12:06.186 ' 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:06.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.186 --rc genhtml_branch_coverage=1 00:12:06.186 --rc genhtml_function_coverage=1 00:12:06.186 --rc genhtml_legend=1 00:12:06.186 --rc geninfo_all_blocks=1 00:12:06.186 --rc geninfo_unexecuted_blocks=1 00:12:06.186 00:12:06.186 ' 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:06.186 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:06.187 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:06.187 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:06.187 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:06.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:06.187 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:06.187 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:06.187 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:06.187 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:06.187 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:06.187 07:14:09 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:06.187 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:12:06.187 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:06.187 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:06.187 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:06.187 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:06.187 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:06.187 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.187 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:06.187 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.187 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:06.187 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:06.187 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:12:06.187 07:14:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:08.087 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:08.087 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.087 
07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:08.087 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:08.088 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:08.088 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:08.088 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.088 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:08.088 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:08.088 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:08.088 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:08.088 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.088 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:08.088 Found net devices under 0000:09:00.0: cvl_0_0 00:12:08.088 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.088 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:08.088 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.088 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:08.088 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:08.088 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:08.088 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:08.088 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.088 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:08.088 Found net devices under 0000:09:00.1: cvl_0_1 00:12:08.088 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.088 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:08.088 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:12:08.088 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:08.088 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:08.088 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:08.088 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:08.088 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:08.088 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:08.088 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:08.088 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:08.088 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:08.088 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:08.088 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:08.088 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:08.088 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:08.088 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:08.088 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:08.088 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:08.088 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:08.088 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:08.088 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:08.088 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:08.347 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:08.347 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:08.347 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:08.347 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:08.347 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:08.347 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:08.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:08.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:12:08.347 00:12:08.347 --- 10.0.0.2 ping statistics --- 00:12:08.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.347 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:12:08.347 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:08.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:08.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:12:08.347 00:12:08.347 --- 10.0.0.1 ping statistics --- 00:12:08.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.347 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:12:08.347 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:08.347 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:12:08.347 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:08.347 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:08.347 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:08.347 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:08.347 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:08.347 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:08.347 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:08.347 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:08.347 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:08.347 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:08.347 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:08.347 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2473474 00:12:08.347 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2473474 00:12:08.347 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # '[' -z 2473474 ']' 00:12:08.347 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:08.347 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.347 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:08.347 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.347 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:08.347 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:08.347 [2024-11-20 07:14:11.664932] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:12:08.347 [2024-11-20 07:14:11.665028] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.347 [2024-11-20 07:14:11.742826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:08.605 [2024-11-20 07:14:11.805165] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:08.605 [2024-11-20 07:14:11.805215] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:08.605 [2024-11-20 07:14:11.805244] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:08.605 [2024-11-20 07:14:11.805256] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:08.605 [2024-11-20 07:14:11.805266] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:08.605 [2024-11-20 07:14:11.806934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.605 [2024-11-20 07:14:11.806990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:08.605 [2024-11-20 07:14:11.807059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:08.605 [2024-11-20 07:14:11.807062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.605 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:08.605 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@866 -- # return 0 00:12:08.605 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:08.605 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:08.605 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:08.605 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.605 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:08.605 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.605 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:08.605 [2024-11-20 07:14:11.969783] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:08.605 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.605 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:08.605 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.605 07:14:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:08.605 Malloc0 00:12:08.605 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.605 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:08.605 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:08.605 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:08.862 Malloc1 00:12:08.862 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.862 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:08.862 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.862 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:08.862 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.862 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:08.862 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.862 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:08.862 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.862 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:08.862 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.862 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:08.862 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.862 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:08.862 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.862 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:08.862 [2024-11-20 07:14:12.073157] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:08.863 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.863 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:08.863 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.863 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:08.863 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.863 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:12:08.863 00:12:08.863 Discovery Log Number of Records 2, Generation counter 2 00:12:08.863 =====Discovery Log Entry 0====== 00:12:08.863 trtype: tcp 00:12:08.863 adrfam: ipv4 00:12:08.863 subtype: current discovery subsystem 00:12:08.863 treq: not required 00:12:08.863 portid: 0 00:12:08.863 trsvcid: 4420 00:12:08.863 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:12:08.863 traddr: 10.0.0.2 00:12:08.863 eflags: explicit discovery connections, duplicate discovery information 00:12:08.863 sectype: none 00:12:08.863 =====Discovery Log Entry 1====== 00:12:08.863 trtype: tcp 00:12:08.863 adrfam: ipv4 00:12:08.863 subtype: nvme subsystem 00:12:08.863 treq: not required 00:12:08.863 portid: 0 00:12:08.863 trsvcid: 4420 00:12:08.863 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:08.863 traddr: 10.0.0.2 00:12:08.863 eflags: none 00:12:08.863 sectype: none 00:12:08.863 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:08.863 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:08.863 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:08.863 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:08.863 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:08.863 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:08.863 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:08.863 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:08.863 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:08.863 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:08.863 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:09.796 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:09.796 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # local i=0 00:12:09.796 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:09.796 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:12:09.796 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:12:09.796 07:14:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # sleep 2 00:12:11.701 07:14:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:11.701 07:14:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:11.701 07:14:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:11.701 07:14:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:12:11.701 07:14:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:11.701 07:14:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # return 0 00:12:11.701 07:14:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:11.701 07:14:14 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:11.701 07:14:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:11.701 07:14:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:11.701 07:14:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:11.701 07:14:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:11.701 07:14:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:11.701 07:14:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:11.701 07:14:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:11.701 07:14:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:12:11.701 07:14:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:11.701 07:14:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:11.701 07:14:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:12:11.701 07:14:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:11.701 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:12:11.701 /dev/nvme0n2 ]] 00:12:11.702 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:11.702 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:11.702 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:11.702 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:11.702 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:11.702 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:11.702 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:11.702 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:11.702 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:11.702 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:11.702 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:12:11.702 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:11.702 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:11.702 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:12:11.702 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:11.702 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:11.702 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:11.702 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.702 07:14:15 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:11.702 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # local i=0 00:12:11.702 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:11.702 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:11.702 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:11.702 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:11.702 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1233 -- # return 0 00:12:11.702 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:11.702 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:11.702 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.702 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:11.702 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.702 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:11.702 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:11.702 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:11.702 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:12:11.702 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:11.702 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:12:11.702 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:11.702 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:11.702 rmmod nvme_tcp 00:12:11.960 rmmod nvme_fabrics 00:12:11.960 rmmod nvme_keyring 00:12:11.960 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:11.960 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:12:11.960 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:12:11.960 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2473474 ']' 00:12:11.960 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2473474 00:12:11.960 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' -z 2473474 ']' 00:12:11.960 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # kill -0 2473474 00:12:11.960 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # uname 00:12:11.960 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:11.960 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 
2473474 00:12:11.960 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:11.960 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:11.960 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2473474' 00:12:11.960 killing process with pid 2473474 00:12:11.960 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # kill 2473474 00:12:11.960 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@976 -- # wait 2473474 00:12:12.218 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:12.218 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:12.218 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:12.218 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:12:12.218 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:12:12.218 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:12.218 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:12:12.218 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:12.218 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:12.218 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.219 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.219 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.124 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:14.124 00:12:14.124 real 0m8.407s 00:12:14.124 user 0m15.188s 00:12:14.124 sys 0m2.413s 00:12:14.124 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:14.124 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:14.124 ************************************ 00:12:14.124 END TEST nvmf_nvme_cli 00:12:14.124 ************************************ 00:12:14.124 07:14:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:12:14.124 07:14:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:14.124 07:14:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:14.124 07:14:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:14.124 07:14:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:14.386 ************************************ 00:12:14.386 START TEST nvmf_vfio_user 00:12:14.386 ************************************ 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:12:14.386 * Looking for test storage... 00:12:14.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:14.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.386 --rc genhtml_branch_coverage=1 00:12:14.386 --rc genhtml_function_coverage=1 00:12:14.386 --rc genhtml_legend=1 00:12:14.386 --rc geninfo_all_blocks=1 00:12:14.386 --rc geninfo_unexecuted_blocks=1 00:12:14.386 00:12:14.386 ' 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:14.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.386 --rc genhtml_branch_coverage=1 00:12:14.386 --rc genhtml_function_coverage=1 00:12:14.386 --rc genhtml_legend=1 00:12:14.386 --rc geninfo_all_blocks=1 00:12:14.386 --rc geninfo_unexecuted_blocks=1 00:12:14.386 00:12:14.386 ' 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:14.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.386 --rc genhtml_branch_coverage=1 00:12:14.386 --rc genhtml_function_coverage=1 00:12:14.386 --rc genhtml_legend=1 00:12:14.386 --rc geninfo_all_blocks=1 00:12:14.386 --rc geninfo_unexecuted_blocks=1 00:12:14.386 00:12:14.386 ' 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:14.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.386 --rc genhtml_branch_coverage=1 00:12:14.386 --rc genhtml_function_coverage=1 00:12:14.386 --rc genhtml_legend=1 00:12:14.386 --rc geninfo_all_blocks=1 00:12:14.386 --rc geninfo_unexecuted_blocks=1 00:12:14.386 00:12:14.386 ' 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.386 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:14.387 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.387 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:12:14.387 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:14.387 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:14.387 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:14.387 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:14.387 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:14.387 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:14.387 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:14.387 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:14.387 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:14.387 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:14.387 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:14.387 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
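For reference while reading the xtrace around here: these are the knobs this nvmf_vfio_user.sh run sets in the entries just above and just below. The values are copied from the log; the comments are explanatory annotations, not script output.
# settings used by nvmf_vfio_user.sh in this run
MALLOC_BDEV_SIZE=64        # size argument later passed to bdev_malloc_create
MALLOC_BLOCK_SIZE=512      # block size of the backing Malloc bdevs
NUM_DEVICES=2              # two vfio-user controllers (cnode1/SPDK1 and cnode2/SPDK2)
TEST_TRANSPORT=VFIOUSER    # exported so the helper scripts use the VFIOUSER transport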
00:12:14.387 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:14.387 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:14.387 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:14.387 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:14.387 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:14.387 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:14.387 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:14.387 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:14.387 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2474401 00:12:14.387 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:14.387 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2474401' 00:12:14.387 Process pid: 2474401 00:12:14.387 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:14.387 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2474401 00:12:14.387 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 2474401 ']' 00:12:14.387 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.387 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:14.387 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.387 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:14.387 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:14.387 [2024-11-20 07:14:17.791011] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:12:14.387 [2024-11-20 07:14:17.791102] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:14.645 [2024-11-20 07:14:17.859476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:14.645 [2024-11-20 07:14:17.919846] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:14.645 [2024-11-20 07:14:17.919896] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:14.645 [2024-11-20 07:14:17.919924] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:14.645 [2024-11-20 07:14:17.919935] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:14.645 [2024-11-20 07:14:17.919944] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:14.645 [2024-11-20 07:14:17.921462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.645 [2024-11-20 07:14:17.921522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:14.645 [2024-11-20 07:14:17.921588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:14.645 [2024-11-20 07:14:17.921592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.645 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:14.645 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:12:14.645 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:16.014 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:16.014 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:16.014 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:16.014 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:16.014 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:16.014 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:16.271 Malloc1 00:12:16.271 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:16.529 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:16.786 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:17.046 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:17.046 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:17.046 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:17.609 Malloc2 00:12:17.609 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
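Condensed, the target bring-up and per-controller setup that setup_nvmf_vfio_user drives in the entries above (and continues for the second device just below) looks roughly like the sketch that follows. Every command is copied from the log, with the nvmf_tgt and rpc.py paths abbreviated; the backgrounding and the per-device loop structure are assumptions about the script, not shown verbatim here.
# start the target on cores 0-3 and wait for its RPC socket
nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
nvmfpid=$!
# the script waits for /var/tmp/spdk.sock, sleeps 1s, then creates the transport
rpc.py nvmf_create_transport -t VFIOUSER
# per device (here device 1; device 2 repeats this with Malloc2/cnode2/SPDK2):
mkdir -p /var/run/vfio-user/domain/vfio-user1/1
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0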
00:12:17.866 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:18.123 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:18.382 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:18.382 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:18.382 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:18.382 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:18.382 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:18.382 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:18.382 [2024-11-20 07:14:21.601265] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:12:18.382 [2024-11-20 07:14:21.601330] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2474831 ] 00:12:18.382 [2024-11-20 07:14:21.653309] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:18.382 [2024-11-20 07:14:21.663796] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:18.382 [2024-11-20 07:14:21.663829] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f94b4247000 00:12:18.382 [2024-11-20 07:14:21.664787] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:18.382 [2024-11-20 07:14:21.665788] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:18.382 [2024-11-20 07:14:21.666792] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:18.382 [2024-11-20 07:14:21.667798] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:18.382 [2024-11-20 07:14:21.668805] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:18.382 [2024-11-20 07:14:21.669812] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:18.382 [2024-11-20 07:14:21.670818] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:12:18.382 [2024-11-20 07:14:21.671824] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:18.382 [2024-11-20 07:14:21.672829] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:18.382 [2024-11-20 07:14:21.672849] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f94b423c000 00:12:18.382 [2024-11-20 07:14:21.673966] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:18.382 [2024-11-20 07:14:21.687898] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:18.382 [2024-11-20 07:14:21.687946] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:12:18.382 [2024-11-20 07:14:21.696974] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:18.382 [2024-11-20 07:14:21.697030] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:18.382 [2024-11-20 07:14:21.697121] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:12:18.382 [2024-11-20 07:14:21.697155] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:12:18.382 [2024-11-20 07:14:21.697166] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:12:18.382 [2024-11-20 07:14:21.697974] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:18.382 [2024-11-20 07:14:21.697995] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:12:18.382 [2024-11-20 07:14:21.698008] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:12:18.382 [2024-11-20 07:14:21.698981] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:18.382 [2024-11-20 07:14:21.699000] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:12:18.382 [2024-11-20 07:14:21.699013] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:12:18.382 [2024-11-20 07:14:21.699985] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:18.382 [2024-11-20 07:14:21.700008] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:18.382 [2024-11-20 07:14:21.700991] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:12:18.382 [2024-11-20 07:14:21.701011] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:12:18.382 [2024-11-20 07:14:21.701020] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:12:18.382 [2024-11-20 07:14:21.701031] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:18.382 [2024-11-20 07:14:21.701141] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:12:18.382 [2024-11-20 07:14:21.701148] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:18.382 [2024-11-20 07:14:21.701157] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:18.382 [2024-11-20 07:14:21.701999] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:18.382 [2024-11-20 07:14:21.702995] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:18.382 [2024-11-20 07:14:21.704007] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:18.382 [2024-11-20 07:14:21.705000] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:18.382 [2024-11-20 07:14:21.705109] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:18.382 [2024-11-20 07:14:21.706013] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:18.382 [2024-11-20 07:14:21.706031] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:18.382 [2024-11-20 07:14:21.706040] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:12:18.382 [2024-11-20 07:14:21.706063] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:12:18.382 [2024-11-20 07:14:21.706081] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:12:18.382 [2024-11-20 07:14:21.706111] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:18.382 [2024-11-20 07:14:21.706121] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:18.382 [2024-11-20 07:14:21.706129] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:18.383 [2024-11-20 07:14:21.706150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:12:18.383 [2024-11-20 07:14:21.706202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:18.383 [2024-11-20 07:14:21.706222] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:12:18.383 [2024-11-20 07:14:21.706230] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:12:18.383 [2024-11-20 07:14:21.706245] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:12:18.383 [2024-11-20 07:14:21.706254] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:18.383 [2024-11-20 07:14:21.706265] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:12:18.383 [2024-11-20 07:14:21.706274] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:12:18.383 [2024-11-20 07:14:21.706296] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:12:18.383 [2024-11-20 07:14:21.706321] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:12:18.383 [2024-11-20 07:14:21.706340] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:18.383 [2024-11-20 07:14:21.706357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:18.383 [2024-11-20 07:14:21.706374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.383 [2024-11-20 07:14:21.706387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.383 [2024-11-20 07:14:21.706398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.383 [2024-11-20 07:14:21.706410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.383 [2024-11-20 07:14:21.706418] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:12:18.383 [2024-11-20 07:14:21.706430] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:18.383 [2024-11-20 07:14:21.706443] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:18.383 [2024-11-20 07:14:21.706454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:18.383 [2024-11-20 07:14:21.706470] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:12:18.383 
[2024-11-20 07:14:21.706481] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:18.383 [2024-11-20 07:14:21.706492] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:12:18.383 [2024-11-20 07:14:21.706502] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:12:18.383 [2024-11-20 07:14:21.706515] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:18.383 [2024-11-20 07:14:21.706526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:18.383 [2024-11-20 07:14:21.706608] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:12:18.383 [2024-11-20 07:14:21.706626] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:12:18.383 [2024-11-20 07:14:21.706643] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:18.383 [2024-11-20 07:14:21.706653] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:18.383 [2024-11-20 07:14:21.706658] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:18.383 [2024-11-20 07:14:21.706668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:18.383 [2024-11-20 07:14:21.706683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:18.383 [2024-11-20 07:14:21.706702] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:12:18.383 [2024-11-20 07:14:21.706723] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:12:18.383 [2024-11-20 07:14:21.706738] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:12:18.383 [2024-11-20 07:14:21.706750] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:18.383 [2024-11-20 07:14:21.706758] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:18.383 [2024-11-20 07:14:21.706763] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:18.383 [2024-11-20 07:14:21.706772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:18.383 [2024-11-20 07:14:21.706801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:18.383 [2024-11-20 07:14:21.706826] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:12:18.383 [2024-11-20 07:14:21.706841] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:18.383 [2024-11-20 07:14:21.706852] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:18.383 [2024-11-20 07:14:21.706860] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:18.383 [2024-11-20 07:14:21.706866] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:18.383 [2024-11-20 07:14:21.706874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:18.383 [2024-11-20 07:14:21.706888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:18.383 [2024-11-20 07:14:21.706902] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:18.383 [2024-11-20 07:14:21.706913] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:12:18.383 [2024-11-20 07:14:21.706927] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:12:18.383 [2024-11-20 07:14:21.706938] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:12:18.383 [2024-11-20 07:14:21.706947] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:18.383 [2024-11-20 07:14:21.706955] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:12:18.383 [2024-11-20 07:14:21.706967] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:12:18.383 [2024-11-20 07:14:21.706974] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:12:18.383 [2024-11-20 07:14:21.706983] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:12:18.383 [2024-11-20 07:14:21.707011] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:18.383 [2024-11-20 07:14:21.707029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:18.383 [2024-11-20 07:14:21.707049] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:18.383 [2024-11-20 07:14:21.707060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:18.383 [2024-11-20 07:14:21.707075] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:18.383 [2024-11-20 07:14:21.707086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:18.383 [2024-11-20 07:14:21.707101] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:18.384 [2024-11-20 07:14:21.707112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:18.384 [2024-11-20 07:14:21.707135] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:18.384 [2024-11-20 07:14:21.707145] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:18.384 [2024-11-20 07:14:21.707151] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:18.384 [2024-11-20 07:14:21.707156] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:18.384 [2024-11-20 07:14:21.707162] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:12:18.384 [2024-11-20 07:14:21.707171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:18.384 [2024-11-20 07:14:21.707182] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:18.384 [2024-11-20 07:14:21.707190] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:18.384 [2024-11-20 07:14:21.707196] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:18.384 [2024-11-20 07:14:21.707204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:18.384 [2024-11-20 07:14:21.707215] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:18.384 [2024-11-20 07:14:21.707223] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:18.384 [2024-11-20 07:14:21.707228] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:18.384 [2024-11-20 07:14:21.707237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:18.384 [2024-11-20 07:14:21.707249] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:18.384 [2024-11-20 07:14:21.707257] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:18.384 [2024-11-20 07:14:21.707262] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:18.384 [2024-11-20 07:14:21.707271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:18.384 [2024-11-20 07:14:21.707308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:18.384 [2024-11-20 07:14:21.707331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:12:18.384 [2024-11-20 07:14:21.707352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:18.384 [2024-11-20 07:14:21.707365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:18.384 ===================================================== 00:12:18.384 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:18.384 ===================================================== 00:12:18.384 Controller Capabilities/Features 00:12:18.384 ================================ 00:12:18.384 Vendor ID: 4e58 00:12:18.384 Subsystem Vendor ID: 4e58 00:12:18.384 Serial Number: SPDK1 00:12:18.384 Model Number: SPDK bdev Controller 00:12:18.384 Firmware Version: 25.01 00:12:18.384 Recommended Arb Burst: 6 00:12:18.384 IEEE OUI Identifier: 8d 6b 50 00:12:18.384 Multi-path I/O 00:12:18.384 May have multiple subsystem ports: Yes 00:12:18.384 May have multiple controllers: Yes 00:12:18.384 Associated with SR-IOV VF: No 00:12:18.384 Max Data Transfer Size: 131072 00:12:18.384 Max Number of Namespaces: 32 00:12:18.384 Max Number of I/O Queues: 127 00:12:18.384 NVMe Specification Version (VS): 1.3 00:12:18.384 NVMe Specification Version (Identify): 1.3 00:12:18.384 Maximum Queue Entries: 256 00:12:18.384 Contiguous Queues Required: Yes 00:12:18.384 Arbitration Mechanisms Supported 00:12:18.384 Weighted Round Robin: Not Supported 00:12:18.384 Vendor Specific: Not Supported 00:12:18.384 Reset Timeout: 15000 ms 00:12:18.384 Doorbell Stride: 4 bytes 00:12:18.384 NVM Subsystem Reset: Not Supported 00:12:18.384 Command Sets Supported 00:12:18.384 NVM Command Set: Supported 00:12:18.384 Boot Partition: Not Supported 00:12:18.384 Memory Page Size Minimum: 4096 bytes 00:12:18.384 Memory Page Size Maximum: 4096 bytes 00:12:18.384 Persistent Memory Region: Not Supported 00:12:18.384 Optional Asynchronous Events Supported 00:12:18.384 Namespace Attribute Notices: Supported 00:12:18.384 Firmware Activation Notices: Not Supported 00:12:18.384 ANA Change Notices: Not Supported 00:12:18.384 PLE Aggregate Log Change Notices: Not Supported 00:12:18.384 LBA Status Info Alert Notices: Not Supported 00:12:18.384 EGE Aggregate Log Change Notices: Not Supported 00:12:18.384 Normal NVM Subsystem Shutdown event: Not Supported 00:12:18.384 Zone Descriptor Change Notices: Not Supported 00:12:18.384 Discovery Log Change Notices: Not Supported 00:12:18.384 Controller Attributes 00:12:18.384 128-bit Host Identifier: Supported 00:12:18.384 Non-Operational Permissive Mode: Not Supported 00:12:18.384 NVM Sets: Not Supported 00:12:18.384 Read Recovery Levels: Not Supported 00:12:18.384 Endurance Groups: Not Supported 00:12:18.384 Predictable Latency Mode: Not Supported 00:12:18.384 Traffic Based Keep ALive: Not Supported 00:12:18.384 Namespace Granularity: Not Supported 00:12:18.384 SQ Associations: Not Supported 00:12:18.384 UUID List: Not Supported 00:12:18.384 Multi-Domain Subsystem: Not Supported 00:12:18.384 Fixed Capacity Management: Not Supported 00:12:18.384 Variable Capacity Management: Not Supported 00:12:18.384 Delete Endurance Group: Not Supported 00:12:18.384 Delete NVM Set: Not Supported 00:12:18.384 Extended LBA Formats Supported: Not Supported 00:12:18.384 Flexible Data Placement Supported: Not Supported 00:12:18.384 00:12:18.384 Controller Memory Buffer Support 00:12:18.384 ================================ 00:12:18.384 
Supported: No 00:12:18.384 00:12:18.384 Persistent Memory Region Support 00:12:18.384 ================================ 00:12:18.384 Supported: No 00:12:18.384 00:12:18.384 Admin Command Set Attributes 00:12:18.384 ============================ 00:12:18.384 Security Send/Receive: Not Supported 00:12:18.384 Format NVM: Not Supported 00:12:18.384 Firmware Activate/Download: Not Supported 00:12:18.384 Namespace Management: Not Supported 00:12:18.384 Device Self-Test: Not Supported 00:12:18.384 Directives: Not Supported 00:12:18.384 NVMe-MI: Not Supported 00:12:18.384 Virtualization Management: Not Supported 00:12:18.384 Doorbell Buffer Config: Not Supported 00:12:18.384 Get LBA Status Capability: Not Supported 00:12:18.384 Command & Feature Lockdown Capability: Not Supported 00:12:18.384 Abort Command Limit: 4 00:12:18.384 Async Event Request Limit: 4 00:12:18.384 Number of Firmware Slots: N/A 00:12:18.384 Firmware Slot 1 Read-Only: N/A 00:12:18.384 Firmware Activation Without Reset: N/A 00:12:18.385 Multiple Update Detection Support: N/A 00:12:18.385 Firmware Update Granularity: No Information Provided 00:12:18.385 Per-Namespace SMART Log: No 00:12:18.385 Asymmetric Namespace Access Log Page: Not Supported 00:12:18.385 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:18.385 Command Effects Log Page: Supported 00:12:18.385 Get Log Page Extended Data: Supported 00:12:18.385 Telemetry Log Pages: Not Supported 00:12:18.385 Persistent Event Log Pages: Not Supported 00:12:18.385 Supported Log Pages Log Page: May Support 00:12:18.385 Commands Supported & Effects Log Page: Not Supported 00:12:18.385 Feature Identifiers & Effects Log Page:May Support 00:12:18.385 NVMe-MI Commands & Effects Log Page: May Support 00:12:18.385 Data Area 4 for Telemetry Log: Not Supported 00:12:18.385 Error Log Page Entries Supported: 128 00:12:18.385 Keep Alive: Supported 00:12:18.385 Keep Alive Granularity: 10000 ms 00:12:18.385 00:12:18.385 NVM Command Set Attributes 00:12:18.385 ========================== 00:12:18.385 Submission Queue Entry Size 00:12:18.385 Max: 64 00:12:18.385 Min: 64 00:12:18.385 Completion Queue Entry Size 00:12:18.385 Max: 16 00:12:18.385 Min: 16 00:12:18.385 Number of Namespaces: 32 00:12:18.385 Compare Command: Supported 00:12:18.385 Write Uncorrectable Command: Not Supported 00:12:18.385 Dataset Management Command: Supported 00:12:18.385 Write Zeroes Command: Supported 00:12:18.385 Set Features Save Field: Not Supported 00:12:18.385 Reservations: Not Supported 00:12:18.385 Timestamp: Not Supported 00:12:18.385 Copy: Supported 00:12:18.385 Volatile Write Cache: Present 00:12:18.385 Atomic Write Unit (Normal): 1 00:12:18.385 Atomic Write Unit (PFail): 1 00:12:18.385 Atomic Compare & Write Unit: 1 00:12:18.385 Fused Compare & Write: Supported 00:12:18.385 Scatter-Gather List 00:12:18.385 SGL Command Set: Supported (Dword aligned) 00:12:18.385 SGL Keyed: Not Supported 00:12:18.385 SGL Bit Bucket Descriptor: Not Supported 00:12:18.385 SGL Metadata Pointer: Not Supported 00:12:18.385 Oversized SGL: Not Supported 00:12:18.385 SGL Metadata Address: Not Supported 00:12:18.385 SGL Offset: Not Supported 00:12:18.385 Transport SGL Data Block: Not Supported 00:12:18.385 Replay Protected Memory Block: Not Supported 00:12:18.385 00:12:18.385 Firmware Slot Information 00:12:18.385 ========================= 00:12:18.385 Active slot: 1 00:12:18.385 Slot 1 Firmware Revision: 25.01 00:12:18.385 00:12:18.385 00:12:18.385 Commands Supported and Effects 00:12:18.385 ============================== 00:12:18.385 Admin 
Commands 00:12:18.385 -------------- 00:12:18.385 Get Log Page (02h): Supported 00:12:18.385 Identify (06h): Supported 00:12:18.385 Abort (08h): Supported 00:12:18.385 Set Features (09h): Supported 00:12:18.385 Get Features (0Ah): Supported 00:12:18.385 Asynchronous Event Request (0Ch): Supported 00:12:18.385 Keep Alive (18h): Supported 00:12:18.385 I/O Commands 00:12:18.385 ------------ 00:12:18.385 Flush (00h): Supported LBA-Change 00:12:18.385 Write (01h): Supported LBA-Change 00:12:18.385 Read (02h): Supported 00:12:18.385 Compare (05h): Supported 00:12:18.385 Write Zeroes (08h): Supported LBA-Change 00:12:18.385 Dataset Management (09h): Supported LBA-Change 00:12:18.385 Copy (19h): Supported LBA-Change 00:12:18.385 00:12:18.385 Error Log 00:12:18.385 ========= 00:12:18.385 00:12:18.385 Arbitration 00:12:18.385 =========== 00:12:18.385 Arbitration Burst: 1 00:12:18.385 00:12:18.385 Power Management 00:12:18.385 ================ 00:12:18.385 Number of Power States: 1 00:12:18.385 Current Power State: Power State #0 00:12:18.385 Power State #0: 00:12:18.385 Max Power: 0.00 W 00:12:18.385 Non-Operational State: Operational 00:12:18.385 Entry Latency: Not Reported 00:12:18.385 Exit Latency: Not Reported 00:12:18.385 Relative Read Throughput: 0 00:12:18.385 Relative Read Latency: 0 00:12:18.385 Relative Write Throughput: 0 00:12:18.385 Relative Write Latency: 0 00:12:18.385 Idle Power: Not Reported 00:12:18.385 Active Power: Not Reported 00:12:18.385 Non-Operational Permissive Mode: Not Supported 00:12:18.385 00:12:18.385 Health Information 00:12:18.385 ================== 00:12:18.385 Critical Warnings: 00:12:18.385 Available Spare Space: OK 00:12:18.385 Temperature: OK 00:12:18.385 Device Reliability: OK 00:12:18.385 Read Only: No 00:12:18.385 Volatile Memory Backup: OK 00:12:18.385 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:18.385 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:18.385 Available Spare: 0% 00:12:18.385 Available Sp[2024-11-20 07:14:21.707501] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:18.385 [2024-11-20 07:14:21.707518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:18.385 [2024-11-20 07:14:21.707566] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:12:18.385 [2024-11-20 07:14:21.707584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.385 [2024-11-20 07:14:21.707610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.385 [2024-11-20 07:14:21.707619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.385 [2024-11-20 07:14:21.707628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.385 [2024-11-20 07:14:21.708024] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:18.385 [2024-11-20 07:14:21.708044] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:18.385 [2024-11-20 07:14:21.709023] 
vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:18.385 [2024-11-20 07:14:21.709116] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:12:18.385 [2024-11-20 07:14:21.709131] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:12:18.385 [2024-11-20 07:14:21.710039] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:18.385 [2024-11-20 07:14:21.710062] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:12:18.385 [2024-11-20 07:14:21.710118] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:18.385 [2024-11-20 07:14:21.713328] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:18.385 are Threshold: 0% 00:12:18.385 Life Percentage Used: 0% 00:12:18.385 Data Units Read: 0 00:12:18.386 Data Units Written: 0 00:12:18.386 Host Read Commands: 0 00:12:18.386 Host Write Commands: 0 00:12:18.386 Controller Busy Time: 0 minutes 00:12:18.386 Power Cycles: 0 00:12:18.386 Power On Hours: 0 hours 00:12:18.386 Unsafe Shutdowns: 0 00:12:18.386 Unrecoverable Media Errors: 0 00:12:18.386 Lifetime Error Log Entries: 0 00:12:18.386 Warning Temperature Time: 0 minutes 00:12:18.386 Critical Temperature Time: 0 minutes 00:12:18.386 00:12:18.386 Number of Queues 00:12:18.386 ================ 00:12:18.386 Number of I/O Submission Queues: 127 00:12:18.386 Number of I/O Completion Queues: 127 00:12:18.386 00:12:18.386 Active Namespaces 00:12:18.386 ================= 00:12:18.386 Namespace ID:1 00:12:18.386 Error Recovery Timeout: Unlimited 00:12:18.386 Command Set Identifier: NVM (00h) 00:12:18.386 Deallocate: Supported 00:12:18.386 Deallocated/Unwritten Error: Not Supported 00:12:18.386 Deallocated Read Value: Unknown 00:12:18.386 Deallocate in Write Zeroes: Not Supported 00:12:18.386 Deallocated Guard Field: 0xFFFF 00:12:18.386 Flush: Supported 00:12:18.386 Reservation: Supported 00:12:18.386 Namespace Sharing Capabilities: Multiple Controllers 00:12:18.386 Size (in LBAs): 131072 (0GiB) 00:12:18.386 Capacity (in LBAs): 131072 (0GiB) 00:12:18.386 Utilization (in LBAs): 131072 (0GiB) 00:12:18.386 NGUID: 699BE2B6E14A42F79CCFFE72837B449F 00:12:18.386 UUID: 699be2b6-e14a-42f7-9ccf-fe72837b449f 00:12:18.386 Thin Provisioning: Not Supported 00:12:18.386 Per-NS Atomic Units: Yes 00:12:18.386 Atomic Boundary Size (Normal): 0 00:12:18.386 Atomic Boundary Size (PFail): 0 00:12:18.386 Atomic Boundary Offset: 0 00:12:18.386 Maximum Single Source Range Length: 65535 00:12:18.386 Maximum Copy Length: 65535 00:12:18.386 Maximum Source Range Count: 1 00:12:18.386 NGUID/EUI64 Never Reused: No 00:12:18.386 Namespace Write Protected: No 00:12:18.386 Number of LBA Formats: 1 00:12:18.386 Current LBA Format: LBA Format #00 00:12:18.386 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:18.386 00:12:18.386 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
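A note on the -r argument used by the identify run above and by the perf/reconnect/arbitration/hello_world runs that follow: each SPDK example app locates the vfio-user controller through the same transport-ID string, where traddr is the directory whose cntrl socket the log shows being opened above, not a PCI address. A generic sketch (the TRID variable is illustrative; the string and flags are copied from the commands in this log):
# one transport ID string, reused by every example app in this test
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
spdk_nvme_identify -r "$TRID" -g -L nvme -L nvme_vfio -L vfio_pci
spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2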
00:12:18.642 [2024-11-20 07:14:21.965221] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:23.910 Initializing NVMe Controllers 00:12:23.910 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:23.910 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:23.910 Initialization complete. Launching workers. 00:12:23.910 ======================================================== 00:12:23.910 Latency(us) 00:12:23.910 Device Information : IOPS MiB/s Average min max 00:12:23.910 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33341.57 130.24 3838.48 1171.43 10360.64 00:12:23.910 ======================================================== 00:12:23.910 Total : 33341.57 130.24 3838.48 1171.43 10360.64 00:12:23.910 00:12:23.910 [2024-11-20 07:14:26.986741] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:23.910 07:14:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:23.910 [2024-11-20 07:14:27.250970] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:29.241 Initializing NVMe Controllers 00:12:29.241 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:29.241 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:29.241 Initialization complete. Launching workers. 
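As a quick illustrative cross-check of the read numbers above (not part of the test itself): the MiB/s column is simply IOPS times the 4096-byte I/O size selected with -o 4096.
# 33341.57 IO/s * 4096 B per I/O / 2^20 ~= 130.24 MiB/s, matching the read table above
awk 'BEGIN{printf "%.2f MiB/s\n", 33341.57 * 4096 / 1048576}'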
00:12:29.241 ======================================================== 00:12:29.241 Latency(us) 00:12:29.241 Device Information : IOPS MiB/s Average min max 00:12:29.241 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16042.45 62.67 7978.03 5939.71 9989.71 00:12:29.241 ======================================================== 00:12:29.241 Total : 16042.45 62.67 7978.03 5939.71 9989.71 00:12:29.241 00:12:29.241 [2024-11-20 07:14:32.287517] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:29.241 07:14:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:29.241 [2024-11-20 07:14:32.518664] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:34.506 [2024-11-20 07:14:37.599659] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:34.506 Initializing NVMe Controllers 00:12:34.506 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:34.506 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:34.506 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:34.506 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:34.506 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:34.506 Initialization complete. Launching workers. 00:12:34.506 Starting thread on core 2 00:12:34.506 Starting thread on core 3 00:12:34.506 Starting thread on core 1 00:12:34.506 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:34.506 [2024-11-20 07:14:37.915797] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:38.700 [2024-11-20 07:14:41.460664] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:38.700 Initializing NVMe Controllers 00:12:38.700 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:38.700 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:38.700 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:38.700 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:38.700 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:38.700 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:38.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:38.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:38.700 Initialization complete. Launching workers. 
00:12:38.700 Starting thread on core 1 with urgent priority queue 00:12:38.700 Starting thread on core 2 with urgent priority queue 00:12:38.700 Starting thread on core 3 with urgent priority queue 00:12:38.700 Starting thread on core 0 with urgent priority queue 00:12:38.700 SPDK bdev Controller (SPDK1 ) core 0: 2254.33 IO/s 44.36 secs/100000 ios 00:12:38.700 SPDK bdev Controller (SPDK1 ) core 1: 2245.33 IO/s 44.54 secs/100000 ios 00:12:38.700 SPDK bdev Controller (SPDK1 ) core 2: 2283.00 IO/s 43.80 secs/100000 ios 00:12:38.700 SPDK bdev Controller (SPDK1 ) core 3: 2232.00 IO/s 44.80 secs/100000 ios 00:12:38.700 ======================================================== 00:12:38.700 00:12:38.700 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:38.700 [2024-11-20 07:14:41.768965] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:38.700 Initializing NVMe Controllers 00:12:38.700 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:38.700 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:38.700 Namespace ID: 1 size: 0GB 00:12:38.700 Initialization complete. 00:12:38.700 INFO: using host memory buffer for IO 00:12:38.700 Hello world! 00:12:38.700 [2024-11-20 07:14:41.802496] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:38.700 07:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:38.700 [2024-11-20 07:14:42.114809] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:40.079 Initializing NVMe Controllers 00:12:40.079 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:40.079 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:40.079 Initialization complete. Launching workers. 
00:12:40.079 submit (in ns) avg, min, max = 7379.7, 3557.8, 4020452.2 00:12:40.079 complete (in ns) avg, min, max = 27641.5, 2080.0, 4016186.7 00:12:40.080 00:12:40.080 Submit histogram 00:12:40.080 ================ 00:12:40.080 Range in us Cumulative Count 00:12:40.080 3.556 - 3.579: 0.3950% ( 51) 00:12:40.080 3.579 - 3.603: 3.5783% ( 411) 00:12:40.080 3.603 - 3.627: 9.1472% ( 719) 00:12:40.080 3.627 - 3.650: 18.1706% ( 1165) 00:12:40.080 3.650 - 3.674: 27.8445% ( 1249) 00:12:40.080 3.674 - 3.698: 37.5184% ( 1249) 00:12:40.080 3.698 - 3.721: 44.9384% ( 958) 00:12:40.080 3.721 - 3.745: 50.6700% ( 740) 00:12:40.080 3.745 - 3.769: 55.2707% ( 594) 00:12:40.080 3.769 - 3.793: 59.4532% ( 540) 00:12:40.080 3.793 - 3.816: 63.0315% ( 462) 00:12:40.080 3.816 - 3.840: 66.4550% ( 442) 00:12:40.080 3.840 - 3.864: 70.6065% ( 536) 00:12:40.080 3.864 - 3.887: 75.0523% ( 574) 00:12:40.080 3.887 - 3.911: 79.5910% ( 586) 00:12:40.080 3.911 - 3.935: 83.3398% ( 484) 00:12:40.080 3.935 - 3.959: 86.1359% ( 361) 00:12:40.080 3.959 - 3.982: 88.1341% ( 258) 00:12:40.080 3.982 - 4.006: 89.8923% ( 227) 00:12:40.080 4.006 - 4.030: 91.3640% ( 190) 00:12:40.080 4.030 - 4.053: 92.4560% ( 141) 00:12:40.080 4.053 - 4.077: 93.5636% ( 143) 00:12:40.080 4.077 - 4.101: 94.4001% ( 108) 00:12:40.080 4.101 - 4.124: 95.0352% ( 82) 00:12:40.080 4.124 - 4.148: 95.5232% ( 63) 00:12:40.080 4.148 - 4.172: 95.9647% ( 57) 00:12:40.080 4.172 - 4.196: 96.2745% ( 40) 00:12:40.080 4.196 - 4.219: 96.4914% ( 28) 00:12:40.080 4.219 - 4.243: 96.6540% ( 21) 00:12:40.080 4.243 - 4.267: 96.7934% ( 18) 00:12:40.080 4.267 - 4.290: 96.8864% ( 12) 00:12:40.080 4.290 - 4.314: 96.9716% ( 11) 00:12:40.080 4.314 - 4.338: 97.0723% ( 13) 00:12:40.080 4.338 - 4.361: 97.1342% ( 8) 00:12:40.080 4.361 - 4.385: 97.1807% ( 6) 00:12:40.080 4.385 - 4.409: 97.2039% ( 3) 00:12:40.080 4.409 - 4.433: 97.2117% ( 1) 00:12:40.080 4.433 - 4.456: 97.2349% ( 3) 00:12:40.080 4.456 - 4.480: 97.2504% ( 2) 00:12:40.080 4.480 - 4.504: 97.2659% ( 2) 00:12:40.080 4.504 - 4.527: 97.2736% ( 1) 00:12:40.080 4.527 - 4.551: 97.2891% ( 2) 00:12:40.080 4.551 - 4.575: 97.2969% ( 1) 00:12:40.080 4.575 - 4.599: 97.3124% ( 2) 00:12:40.080 4.622 - 4.646: 97.3434% ( 4) 00:12:40.080 4.646 - 4.670: 97.3821% ( 5) 00:12:40.080 4.670 - 4.693: 97.4285% ( 6) 00:12:40.080 4.693 - 4.717: 97.5292% ( 13) 00:12:40.080 4.717 - 4.741: 97.5525% ( 3) 00:12:40.080 4.741 - 4.764: 97.6067% ( 7) 00:12:40.080 4.764 - 4.788: 97.6377% ( 4) 00:12:40.080 4.788 - 4.812: 97.6841% ( 6) 00:12:40.080 4.812 - 4.836: 97.7151% ( 4) 00:12:40.080 4.836 - 4.859: 97.7461% ( 4) 00:12:40.080 4.859 - 4.883: 97.7771% ( 4) 00:12:40.080 4.883 - 4.907: 97.8236% ( 6) 00:12:40.080 4.907 - 4.930: 97.8778% ( 7) 00:12:40.080 4.930 - 4.954: 97.9088% ( 4) 00:12:40.080 4.954 - 4.978: 97.9243% ( 2) 00:12:40.080 4.978 - 5.001: 97.9862% ( 8) 00:12:40.080 5.001 - 5.025: 98.0094% ( 3) 00:12:40.080 5.025 - 5.049: 98.0559% ( 6) 00:12:40.080 5.049 - 5.073: 98.0792% ( 3) 00:12:40.080 5.073 - 5.096: 98.1179% ( 5) 00:12:40.080 5.096 - 5.120: 98.1334% ( 2) 00:12:40.080 5.120 - 5.144: 98.1489% ( 2) 00:12:40.080 5.191 - 5.215: 98.1566% ( 1) 00:12:40.080 5.215 - 5.239: 98.1721% ( 2) 00:12:40.080 5.239 - 5.262: 98.1798% ( 1) 00:12:40.080 5.262 - 5.286: 98.2108% ( 4) 00:12:40.080 5.381 - 5.404: 98.2186% ( 1) 00:12:40.080 5.404 - 5.428: 98.2341% ( 2) 00:12:40.080 5.452 - 5.476: 98.2418% ( 1) 00:12:40.080 5.476 - 5.499: 98.2496% ( 1) 00:12:40.080 5.547 - 5.570: 98.2573% ( 1) 00:12:40.080 5.570 - 5.594: 98.2728% ( 2) 00:12:40.080 5.618 - 5.641: 98.2805% ( 1) 
00:12:40.080 5.760 - 5.784: 98.2883% ( 1) 00:12:40.080 5.784 - 5.807: 98.3038% ( 2) 00:12:40.080 5.973 - 5.997: 98.3115% ( 1) 00:12:40.080 6.068 - 6.116: 98.3580% ( 6) 00:12:40.080 6.116 - 6.163: 98.3657% ( 1) 00:12:40.080 6.163 - 6.210: 98.3735% ( 1) 00:12:40.080 6.258 - 6.305: 98.3812% ( 1) 00:12:40.080 6.827 - 6.874: 98.3890% ( 1) 00:12:40.080 6.874 - 6.921: 98.3967% ( 1) 00:12:40.080 6.921 - 6.969: 98.4122% ( 2) 00:12:40.080 7.016 - 7.064: 98.4200% ( 1) 00:12:40.080 7.111 - 7.159: 98.4354% ( 2) 00:12:40.080 7.159 - 7.206: 98.4432% ( 1) 00:12:40.080 7.253 - 7.301: 98.4509% ( 1) 00:12:40.080 7.348 - 7.396: 98.4587% ( 1) 00:12:40.080 7.443 - 7.490: 98.4742% ( 2) 00:12:40.080 7.538 - 7.585: 98.4819% ( 1) 00:12:40.080 7.633 - 7.680: 98.4897% ( 1) 00:12:40.080 7.775 - 7.822: 98.5052% ( 2) 00:12:40.080 7.822 - 7.870: 98.5206% ( 2) 00:12:40.080 7.870 - 7.917: 98.5361% ( 2) 00:12:40.080 8.012 - 8.059: 98.5439% ( 1) 00:12:40.080 8.059 - 8.107: 98.5749% ( 4) 00:12:40.080 8.107 - 8.154: 98.5981% ( 3) 00:12:40.080 8.154 - 8.201: 98.6058% ( 1) 00:12:40.080 8.201 - 8.249: 98.6136% ( 1) 00:12:40.080 8.391 - 8.439: 98.6213% ( 1) 00:12:40.080 8.439 - 8.486: 98.6368% ( 2) 00:12:40.080 8.533 - 8.581: 98.6446% ( 1) 00:12:40.080 8.581 - 8.628: 98.6523% ( 1) 00:12:40.080 8.865 - 8.913: 98.6678% ( 2) 00:12:40.080 9.007 - 9.055: 98.6755% ( 1) 00:12:40.080 9.102 - 9.150: 98.6833% ( 1) 00:12:40.080 9.244 - 9.292: 98.6910% ( 1) 00:12:40.080 9.292 - 9.339: 98.6988% ( 1) 00:12:40.080 9.576 - 9.624: 98.7065% ( 1) 00:12:40.080 9.671 - 9.719: 98.7143% ( 1) 00:12:40.080 10.050 - 10.098: 98.7298% ( 2) 00:12:40.080 10.098 - 10.145: 98.7375% ( 1) 00:12:40.080 10.145 - 10.193: 98.7453% ( 1) 00:12:40.080 10.382 - 10.430: 98.7607% ( 2) 00:12:40.080 10.430 - 10.477: 98.7685% ( 1) 00:12:40.080 10.856 - 10.904: 98.7762% ( 1) 00:12:40.080 10.951 - 10.999: 98.7840% ( 1) 00:12:40.080 10.999 - 11.046: 98.7917% ( 1) 00:12:40.080 11.046 - 11.093: 98.8072% ( 2) 00:12:40.080 11.093 - 11.141: 98.8150% ( 1) 00:12:40.080 11.236 - 11.283: 98.8227% ( 1) 00:12:40.080 11.757 - 11.804: 98.8305% ( 1) 00:12:40.080 11.994 - 12.041: 98.8382% ( 1) 00:12:40.080 12.041 - 12.089: 98.8459% ( 1) 00:12:40.080 12.231 - 12.326: 98.8537% ( 1) 00:12:40.080 12.610 - 12.705: 98.8614% ( 1) 00:12:40.080 12.895 - 12.990: 98.8769% ( 2) 00:12:40.080 14.127 - 14.222: 98.8847% ( 1) 00:12:40.080 14.507 - 14.601: 98.8924% ( 1) 00:12:40.080 14.601 - 14.696: 98.9002% ( 1) 00:12:40.080 14.886 - 14.981: 98.9079% ( 1) 00:12:40.080 15.170 - 15.265: 98.9157% ( 1) 00:12:40.080 15.265 - 15.360: 98.9234% ( 1) 00:12:40.080 17.067 - 17.161: 98.9311% ( 1) 00:12:40.080 17.161 - 17.256: 98.9466% ( 2) 00:12:40.080 17.256 - 17.351: 98.9621% ( 2) 00:12:40.080 17.351 - 17.446: 98.9699% ( 1) 00:12:40.080 17.446 - 17.541: 98.9854% ( 2) 00:12:40.080 17.541 - 17.636: 99.0163% ( 4) 00:12:40.080 17.636 - 17.730: 99.0861% ( 9) 00:12:40.080 17.730 - 17.825: 99.1325% ( 6) 00:12:40.080 17.825 - 17.920: 99.1790% ( 6) 00:12:40.080 17.920 - 18.015: 99.2332% ( 7) 00:12:40.080 18.015 - 18.110: 99.2874% ( 7) 00:12:40.080 18.110 - 18.204: 99.3649% ( 10) 00:12:40.080 18.204 - 18.299: 99.4501% ( 11) 00:12:40.080 18.299 - 18.394: 99.5120% ( 8) 00:12:40.080 18.394 - 18.489: 99.5508% ( 5) 00:12:40.080 18.489 - 18.584: 99.6360% ( 11) 00:12:40.080 18.584 - 18.679: 99.6747% ( 5) 00:12:40.080 18.679 - 18.773: 99.7212% ( 6) 00:12:40.080 18.773 - 18.868: 99.7521% ( 4) 00:12:40.080 18.868 - 18.963: 99.7599% ( 1) 00:12:40.080 18.963 - 19.058: 99.7986% ( 5) 00:12:40.080 19.247 - 19.342: 99.8141% ( 2) 00:12:40.080 
19.532 - 19.627: 99.8219% ( 1) 00:12:40.080 19.627 - 19.721: 99.8296% ( 1) 00:12:40.080 19.721 - 19.816: 99.8451% ( 2) 00:12:40.080 20.290 - 20.385: 99.8528% ( 1) 00:12:40.080 21.428 - 21.523: 99.8606% ( 1) 00:12:40.080 23.040 - 23.135: 99.8761% ( 2) 00:12:40.080 26.169 - 26.359: 99.8916% ( 2) 00:12:40.080 28.065 - 28.255: 99.8993% ( 1) 00:12:40.080 28.444 - 28.634: 99.9071% ( 1) 00:12:40.080 28.824 - 29.013: 99.9148% ( 1) 00:12:40.080 3980.705 - 4004.978: 99.9845% ( 9) 00:12:40.080 4004.978 - 4029.250: 100.0000% ( 2) 00:12:40.080 00:12:40.080 Complete histogram 00:12:40.080 ================== 00:12:40.080 Range in us Cumulative Count 00:12:40.080 2.074 - 2.086: 1.3632% ( 176) 00:12:40.080 2.086 - 2.098: 30.6096% ( 3776) 00:12:40.080 2.098 - 2.110: 43.5907% ( 1676) 00:12:40.080 2.110 - 2.121: 46.9987% ( 440) 00:12:40.080 2.121 - 2.133: 54.9377% ( 1025) 00:12:40.081 2.133 - 2.145: 57.4239% ( 321) 00:12:40.081 2.145 - 2.157: 60.8241% ( 439) 00:12:40.081 2.157 - 2.169: 72.7519% ( 1540) 00:12:40.081 2.169 - 2.181: 75.5867% ( 366) 00:12:40.081 2.181 - 2.193: 77.1203% ( 198) 00:12:40.081 2.193 - 2.204: 79.9861% ( 370) 00:12:40.081 2.204 - 2.216: 80.7993% ( 105) 00:12:40.081 2.216 - 2.228: 82.0928% ( 167) 00:12:40.081 2.228 - 2.240: 86.7942% ( 607) 00:12:40.081 2.240 - 2.252: 89.7607% ( 383) 00:12:40.081 2.252 - 2.264: 91.3562% ( 206) 00:12:40.081 2.264 - 2.276: 93.0060% ( 213) 00:12:40.081 2.276 - 2.287: 93.4939% ( 63) 00:12:40.081 2.287 - 2.299: 93.7960% ( 39) 00:12:40.081 2.299 - 2.311: 94.2685% ( 61) 00:12:40.081 2.311 - 2.323: 94.8571% ( 76) 00:12:40.081 2.323 - 2.335: 95.2444% ( 50) 00:12:40.081 2.335 - 2.347: 95.4225% ( 23) 00:12:40.081 2.347 - 2.359: 95.4535% ( 4) 00:12:40.081 2.359 - 2.370: 95.5232% ( 9) 00:12:40.081 2.370 - 2.382: 95.6936% ( 22) 00:12:40.081 2.382 - 2.394: 95.9647% ( 35) 00:12:40.081 2.394 - 2.406: 96.5223% ( 72) 00:12:40.081 2.406 - 2.418: 96.8167% ( 38) 00:12:40.081 2.418 - 2.430: 97.0258% ( 27) 00:12:40.081 2.430 - 2.441: 97.1652% ( 18) 00:12:40.081 2.441 - 2.453: 97.2814% ( 15) 00:12:40.081 2.453 - 2.465: 97.4595% ( 23) 00:12:40.081 2.465 - 2.477: 97.6067% ( 19) 00:12:40.081 2.477 - 2.489: 97.7539% ( 19) 00:12:40.081 2.489 - 2.501: 97.8391% ( 11) 00:12:40.081 2.501 - 2.513: 97.8933% ( 7) 00:12:40.081 2.513 - 2.524: 97.9397% ( 6) 00:12:40.081 2.524 - 2.536: 97.9940% ( 7) 00:12:40.081 2.536 - 2.548: 98.0327% ( 5) 00:12:40.081 2.548 - 2.560: 98.0559% ( 3) 00:12:40.081 2.560 - 2.572: 98.0792% ( 3) 00:12:40.081 2.572 - 2.584: 98.0946% ( 2) 00:12:40.081 2.619 - 2.631: 98.1256% ( 4) 00:12:40.081 2.655 - 2.667: 98.1489% ( 3) 00:12:40.081 2.667 - 2.679: 98.1566% ( 1) 00:12:40.081 2.679 - 2.690: 98.1644% ( 1) 00:12:40.081 2.690 - 2.702: 98.1876% ( 3) 00:12:40.081 2.702 - 2.714: 98.1953% ( 1) 00:12:40.081 2.714 - 2.726: 98.2031% ( 1) 00:12:40.081 2.726 - 2.738: 98.2108% ( 1) 00:12:40.081 2.738 - 2.750: 98.2263% ( 2) 00:12:40.081 2.761 - 2.773: 98.2341% ( 1) 00:12:40.081 2.773 - 2.785: 98.2496% ( 2) 00:12:40.081 2.785 - 2.797: 98.2573% ( 1) 00:12:40.081 2.821 - 2.833: 98.2650% ( 1) 00:12:40.081 2.833 - 2.844: 98.2728% ( 1) 00:12:40.081 2.844 - 2.856: 98.2805% ( 1) 00:12:40.081 2.868 - 2.880: 98.2960% ( 2) 00:12:40.081 2.904 - 2.916: 98.3038% ( 1) 00:12:40.081 2.927 - 2.939: 98.3193% ( 2) 00:12:40.081 2.987 - 2.999: 98.3270% ( 1) 00:12:40.081 2.999 - 3.010: 98.3348% ( 1) 00:12:40.081 3.010 - 3.022: 98.3502% ( 2) 00:12:40.081 3.081 - 3.105: 98.3580% ( 1) 00:12:40.081 3.200 - 3.224: 98.3735% ( 2) 00:12:40.081 3.271 - 3.295: 98.3812% ( 1) 00:12:40.081 3.319 - 3.342: 98.3890% ( 1) 
00:12:40.081 3.366 - 3.390: 98.4045% ( 2) 00:12:40.081 3.390 - 3.413: 98.4200% ( 2) 00:12:40.081 3.413 - 3.437: 98.4277% ( 1) 00:12:40.081 3.437 - 3.461: 98.4354% ( 1) 00:12:40.081 3.461 - 3.484: 98.4664% ( 4) 00:12:40.081 3.508 - 3.532: 98.4742% ( 1) 00:12:40.081 3.532 - 3.556: 98.4974% ( 3) 00:12:40.081 3.603 - 3.627: 98.5052% ( 1) 00:12:40.081 3.627 - 3.650: 98.5361% ( 4) 00:12:40.081 3.650 - 3.674: 98.5439% ( 1) 00:12:40.081 3.745 - 3.769: 98.5594% ( 2) 00:12:40.081 3.793 - 3.816: 98.5671% ( 1) 00:12:40.081 3.816 - 3.840: 9[2024-11-20 07:14:43.135982] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:40.081 8.5749% ( 1) 00:12:40.081 3.935 - 3.959: 98.5826% ( 1) 00:12:40.081 3.982 - 4.006: 98.5903% ( 1) 00:12:40.081 4.196 - 4.219: 98.6058% ( 2) 00:12:40.081 5.310 - 5.333: 98.6136% ( 1) 00:12:40.081 5.381 - 5.404: 98.6213% ( 1) 00:12:40.081 5.404 - 5.428: 98.6291% ( 1) 00:12:40.081 5.428 - 5.452: 98.6368% ( 1) 00:12:40.081 5.547 - 5.570: 98.6446% ( 1) 00:12:40.081 5.641 - 5.665: 98.6523% ( 1) 00:12:40.081 5.689 - 5.713: 98.6601% ( 1) 00:12:40.081 5.879 - 5.902: 98.6678% ( 1) 00:12:40.081 5.950 - 5.973: 98.6755% ( 1) 00:12:40.081 5.997 - 6.021: 98.6988% ( 3) 00:12:40.081 6.044 - 6.068: 98.7065% ( 1) 00:12:40.081 6.258 - 6.305: 98.7143% ( 1) 00:12:40.081 6.447 - 6.495: 98.7220% ( 1) 00:12:40.081 6.542 - 6.590: 98.7298% ( 1) 00:12:40.081 6.590 - 6.637: 98.7375% ( 1) 00:12:40.081 6.827 - 6.874: 98.7453% ( 1) 00:12:40.081 7.111 - 7.159: 98.7530% ( 1) 00:12:40.081 7.301 - 7.348: 98.7607% ( 1) 00:12:40.081 7.964 - 8.012: 98.7685% ( 1) 00:12:40.081 8.865 - 8.913: 98.7762% ( 1) 00:12:40.081 10.287 - 10.335: 98.7840% ( 1) 00:12:40.081 15.265 - 15.360: 98.7917% ( 1) 00:12:40.081 15.455 - 15.550: 98.7995% ( 1) 00:12:40.081 15.550 - 15.644: 98.8072% ( 1) 00:12:40.081 15.739 - 15.834: 98.8305% ( 3) 00:12:40.081 15.834 - 15.929: 98.8924% ( 8) 00:12:40.081 15.929 - 16.024: 98.9079% ( 2) 00:12:40.081 16.024 - 16.119: 98.9311% ( 3) 00:12:40.081 16.119 - 16.213: 98.9621% ( 4) 00:12:40.081 16.213 - 16.308: 98.9854% ( 3) 00:12:40.081 16.308 - 16.403: 99.0163% ( 4) 00:12:40.081 16.403 - 16.498: 99.0473% ( 4) 00:12:40.081 16.498 - 16.593: 99.0861% ( 5) 00:12:40.081 16.593 - 16.687: 99.1170% ( 4) 00:12:40.081 16.687 - 16.782: 99.1558% ( 5) 00:12:40.081 16.782 - 16.877: 99.2332% ( 10) 00:12:40.081 16.877 - 16.972: 99.2642% ( 4) 00:12:40.081 16.972 - 17.067: 99.2719% ( 1) 00:12:40.081 17.067 - 17.161: 99.2874% ( 2) 00:12:40.081 17.161 - 17.256: 99.3029% ( 2) 00:12:40.081 17.256 - 17.351: 99.3184% ( 2) 00:12:40.081 17.541 - 17.636: 99.3339% ( 2) 00:12:40.081 17.636 - 17.730: 99.3494% ( 2) 00:12:40.081 17.920 - 18.015: 99.3571% ( 1) 00:12:40.081 18.394 - 18.489: 99.3649% ( 1) 00:12:40.081 3980.705 - 4004.978: 99.9458% ( 75) 00:12:40.081 4004.978 - 4029.250: 100.0000% ( 7) 00:12:40.081 00:12:40.081 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:40.082 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:40.082 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:40.082 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:40.082 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:40.082 [ 00:12:40.082 { 00:12:40.082 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:40.082 "subtype": "Discovery", 00:12:40.082 "listen_addresses": [], 00:12:40.082 "allow_any_host": true, 00:12:40.082 "hosts": [] 00:12:40.082 }, 00:12:40.082 { 00:12:40.082 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:40.082 "subtype": "NVMe", 00:12:40.082 "listen_addresses": [ 00:12:40.082 { 00:12:40.082 "trtype": "VFIOUSER", 00:12:40.082 "adrfam": "IPv4", 00:12:40.082 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:40.082 "trsvcid": "0" 00:12:40.082 } 00:12:40.082 ], 00:12:40.082 "allow_any_host": true, 00:12:40.082 "hosts": [], 00:12:40.082 "serial_number": "SPDK1", 00:12:40.082 "model_number": "SPDK bdev Controller", 00:12:40.082 "max_namespaces": 32, 00:12:40.082 "min_cntlid": 1, 00:12:40.082 "max_cntlid": 65519, 00:12:40.082 "namespaces": [ 00:12:40.082 { 00:12:40.082 "nsid": 1, 00:12:40.082 "bdev_name": "Malloc1", 00:12:40.082 "name": "Malloc1", 00:12:40.082 "nguid": "699BE2B6E14A42F79CCFFE72837B449F", 00:12:40.082 "uuid": "699be2b6-e14a-42f7-9ccf-fe72837b449f" 00:12:40.082 } 00:12:40.082 ] 00:12:40.082 }, 00:12:40.082 { 00:12:40.082 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:40.082 "subtype": "NVMe", 00:12:40.082 "listen_addresses": [ 00:12:40.082 { 00:12:40.082 "trtype": "VFIOUSER", 00:12:40.082 "adrfam": "IPv4", 00:12:40.082 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:40.082 "trsvcid": "0" 00:12:40.082 } 00:12:40.082 ], 00:12:40.082 "allow_any_host": true, 00:12:40.082 "hosts": [], 00:12:40.082 "serial_number": "SPDK2", 00:12:40.082 "model_number": "SPDK bdev Controller", 00:12:40.082 "max_namespaces": 32, 00:12:40.082 "min_cntlid": 1, 00:12:40.082 "max_cntlid": 65519, 00:12:40.082 "namespaces": [ 00:12:40.082 { 00:12:40.082 "nsid": 1, 00:12:40.082 "bdev_name": "Malloc2", 00:12:40.082 "name": "Malloc2", 00:12:40.082 "nguid": "62F965D6F30C4C72AD9FE4880339220D", 00:12:40.082 "uuid": "62f965d6-f30c-4c72-ad9f-e4880339220d" 00:12:40.082 } 00:12:40.082 ] 00:12:40.082 } 00:12:40.082 ] 00:12:40.082 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:40.082 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2477371 00:12:40.082 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:40.082 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:40.082 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:12:40.082 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:40.082 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:40.082 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:12:40.082 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:40.082 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:40.341 [2024-11-20 07:14:43.615919] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:40.341 Malloc3 00:12:40.341 07:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:40.600 [2024-11-20 07:14:44.017929] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:40.859 07:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:40.859 Asynchronous Event Request test 00:12:40.859 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:40.859 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:40.859 Registering asynchronous event callbacks... 00:12:40.859 Starting namespace attribute notice tests for all controllers... 00:12:40.859 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:40.859 aer_cb - Changed Namespace 00:12:40.859 Cleaning up... 00:12:41.140 [ 00:12:41.140 { 00:12:41.140 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:41.140 "subtype": "Discovery", 00:12:41.140 "listen_addresses": [], 00:12:41.140 "allow_any_host": true, 00:12:41.140 "hosts": [] 00:12:41.140 }, 00:12:41.140 { 00:12:41.140 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:41.140 "subtype": "NVMe", 00:12:41.140 "listen_addresses": [ 00:12:41.140 { 00:12:41.140 "trtype": "VFIOUSER", 00:12:41.140 "adrfam": "IPv4", 00:12:41.140 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:41.140 "trsvcid": "0" 00:12:41.140 } 00:12:41.140 ], 00:12:41.140 "allow_any_host": true, 00:12:41.140 "hosts": [], 00:12:41.140 "serial_number": "SPDK1", 00:12:41.140 "model_number": "SPDK bdev Controller", 00:12:41.140 "max_namespaces": 32, 00:12:41.140 "min_cntlid": 1, 00:12:41.140 "max_cntlid": 65519, 00:12:41.140 "namespaces": [ 00:12:41.140 { 00:12:41.140 "nsid": 1, 00:12:41.140 "bdev_name": "Malloc1", 00:12:41.140 "name": "Malloc1", 00:12:41.140 "nguid": "699BE2B6E14A42F79CCFFE72837B449F", 00:12:41.140 "uuid": "699be2b6-e14a-42f7-9ccf-fe72837b449f" 00:12:41.140 }, 00:12:41.140 { 00:12:41.140 "nsid": 2, 00:12:41.140 "bdev_name": "Malloc3", 00:12:41.140 "name": "Malloc3", 00:12:41.140 "nguid": "0034CCCA5B2C40BDA720CF277B4E432A", 00:12:41.140 "uuid": "0034ccca-5b2c-40bd-a720-cf277b4e432a" 00:12:41.140 } 00:12:41.140 ] 00:12:41.140 }, 00:12:41.140 { 00:12:41.140 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:41.140 "subtype": "NVMe", 00:12:41.140 "listen_addresses": [ 00:12:41.140 { 00:12:41.140 "trtype": "VFIOUSER", 00:12:41.140 "adrfam": "IPv4", 00:12:41.140 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:41.140 "trsvcid": "0" 00:12:41.140 } 00:12:41.140 ], 00:12:41.140 "allow_any_host": true, 00:12:41.140 "hosts": [], 00:12:41.140 "serial_number": "SPDK2", 00:12:41.140 "model_number": "SPDK bdev 
Controller", 00:12:41.140 "max_namespaces": 32, 00:12:41.140 "min_cntlid": 1, 00:12:41.140 "max_cntlid": 65519, 00:12:41.140 "namespaces": [ 00:12:41.140 { 00:12:41.140 "nsid": 1, 00:12:41.140 "bdev_name": "Malloc2", 00:12:41.140 "name": "Malloc2", 00:12:41.140 "nguid": "62F965D6F30C4C72AD9FE4880339220D", 00:12:41.140 "uuid": "62f965d6-f30c-4c72-ad9f-e4880339220d" 00:12:41.140 } 00:12:41.140 ] 00:12:41.140 } 00:12:41.140 ] 00:12:41.140 07:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2477371 00:12:41.140 07:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:41.140 07:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:41.140 07:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:41.140 07:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:41.140 [2024-11-20 07:14:44.322375] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:12:41.140 [2024-11-20 07:14:44.322413] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2477509 ] 00:12:41.140 [2024-11-20 07:14:44.371157] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:41.140 [2024-11-20 07:14:44.379635] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:41.140 [2024-11-20 07:14:44.379668] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc54dc99000 00:12:41.140 [2024-11-20 07:14:44.380634] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:41.140 [2024-11-20 07:14:44.381635] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:41.140 [2024-11-20 07:14:44.382662] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:41.140 [2024-11-20 07:14:44.383655] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:41.140 [2024-11-20 07:14:44.384658] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:41.140 [2024-11-20 07:14:44.385663] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:41.140 [2024-11-20 07:14:44.386670] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:41.140 [2024-11-20 07:14:44.387679] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:12:41.140 [2024-11-20 07:14:44.388692] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:41.140 [2024-11-20 07:14:44.388714] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc54dc8e000 00:12:41.140 [2024-11-20 07:14:44.389827] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:41.140 [2024-11-20 07:14:44.404991] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:41.140 [2024-11-20 07:14:44.405029] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:12:41.140 [2024-11-20 07:14:44.410147] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:41.140 [2024-11-20 07:14:44.410201] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:41.140 [2024-11-20 07:14:44.410311] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:12:41.140 [2024-11-20 07:14:44.410339] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:12:41.140 [2024-11-20 07:14:44.410365] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:12:41.140 [2024-11-20 07:14:44.411168] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:41.140 [2024-11-20 07:14:44.411189] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:12:41.140 [2024-11-20 07:14:44.411202] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:12:41.141 [2024-11-20 07:14:44.412172] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:41.141 [2024-11-20 07:14:44.412192] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:12:41.141 [2024-11-20 07:14:44.412206] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:12:41.141 [2024-11-20 07:14:44.413178] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:41.141 [2024-11-20 07:14:44.413199] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:41.141 [2024-11-20 07:14:44.414180] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:41.141 [2024-11-20 07:14:44.414200] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
00:12:41.141 [2024-11-20 07:14:44.414208] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:12:41.141 [2024-11-20 07:14:44.414220] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:41.141 [2024-11-20 07:14:44.414330] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:12:41.141 [2024-11-20 07:14:44.414340] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:41.141 [2024-11-20 07:14:44.414349] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:41.141 [2024-11-20 07:14:44.415184] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:41.141 [2024-11-20 07:14:44.416189] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:41.141 [2024-11-20 07:14:44.417201] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:41.141 [2024-11-20 07:14:44.418198] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:41.141 [2024-11-20 07:14:44.418279] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:41.141 [2024-11-20 07:14:44.419209] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:41.141 [2024-11-20 07:14:44.419229] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:41.141 [2024-11-20 07:14:44.419238] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:12:41.141 [2024-11-20 07:14:44.419262] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:12:41.141 [2024-11-20 07:14:44.419294] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:12:41.141 [2024-11-20 07:14:44.419325] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:41.141 [2024-11-20 07:14:44.419336] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:41.141 [2024-11-20 07:14:44.419343] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:41.141 [2024-11-20 07:14:44.419361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:41.141 [2024-11-20 07:14:44.427318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:41.141 
[2024-11-20 07:14:44.427342] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:12:41.141 [2024-11-20 07:14:44.427351] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:12:41.141 [2024-11-20 07:14:44.427358] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:12:41.141 [2024-11-20 07:14:44.427367] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:41.141 [2024-11-20 07:14:44.427379] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:12:41.141 [2024-11-20 07:14:44.427389] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:12:41.141 [2024-11-20 07:14:44.427397] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:12:41.141 [2024-11-20 07:14:44.427414] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:12:41.141 [2024-11-20 07:14:44.427431] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:41.141 [2024-11-20 07:14:44.435314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:41.141 [2024-11-20 07:14:44.435339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.141 [2024-11-20 07:14:44.435353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.141 [2024-11-20 07:14:44.435365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.141 [2024-11-20 07:14:44.435382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.141 [2024-11-20 07:14:44.435392] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:12:41.141 [2024-11-20 07:14:44.435405] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:41.141 [2024-11-20 07:14:44.435419] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:41.141 [2024-11-20 07:14:44.443315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:41.141 [2024-11-20 07:14:44.443339] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:12:41.141 [2024-11-20 07:14:44.443350] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:12:41.141 [2024-11-20 07:14:44.443361] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:12:41.141 [2024-11-20 07:14:44.443371] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:12:41.141 [2024-11-20 07:14:44.443385] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:41.141 [2024-11-20 07:14:44.451313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:41.141 [2024-11-20 07:14:44.451391] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:12:41.141 [2024-11-20 07:14:44.451408] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:12:41.141 [2024-11-20 07:14:44.451422] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:41.141 [2024-11-20 07:14:44.451430] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:41.141 [2024-11-20 07:14:44.451436] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:41.141 [2024-11-20 07:14:44.451446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:41.141 [2024-11-20 07:14:44.459326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:41.141 [2024-11-20 07:14:44.459351] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:12:41.141 [2024-11-20 07:14:44.459367] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:12:41.141 [2024-11-20 07:14:44.459382] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:12:41.141 [2024-11-20 07:14:44.459395] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:41.141 [2024-11-20 07:14:44.459404] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:41.141 [2024-11-20 07:14:44.459410] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:41.141 [2024-11-20 07:14:44.459419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:41.141 [2024-11-20 07:14:44.467330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:41.141 [2024-11-20 07:14:44.467360] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:41.141 [2024-11-20 07:14:44.467376] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:12:41.141 [2024-11-20 07:14:44.467390] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:41.141 [2024-11-20 07:14:44.467398] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:41.141 [2024-11-20 07:14:44.467404] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:41.141 [2024-11-20 07:14:44.467413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:41.141 [2024-11-20 07:14:44.475316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:41.141 [2024-11-20 07:14:44.475338] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:41.141 [2024-11-20 07:14:44.475351] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:12:41.141 [2024-11-20 07:14:44.475365] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:12:41.141 [2024-11-20 07:14:44.475376] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:12:41.141 [2024-11-20 07:14:44.475385] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:41.142 [2024-11-20 07:14:44.475393] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:12:41.142 [2024-11-20 07:14:44.475402] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:12:41.142 [2024-11-20 07:14:44.475409] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:12:41.142 [2024-11-20 07:14:44.475418] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:12:41.142 [2024-11-20 07:14:44.475442] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:41.142 [2024-11-20 07:14:44.483326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:41.142 [2024-11-20 07:14:44.483353] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:41.142 [2024-11-20 07:14:44.491329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:41.142 [2024-11-20 07:14:44.491355] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:41.142 [2024-11-20 07:14:44.499329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
00:12:41.142 [2024-11-20 07:14:44.499355] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:41.142 [2024-11-20 07:14:44.507316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:41.142 [2024-11-20 07:14:44.507351] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:41.142 [2024-11-20 07:14:44.507363] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:41.142 [2024-11-20 07:14:44.507369] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:41.142 [2024-11-20 07:14:44.507375] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:41.142 [2024-11-20 07:14:44.507380] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:12:41.142 [2024-11-20 07:14:44.507390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:41.142 [2024-11-20 07:14:44.507402] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:41.142 [2024-11-20 07:14:44.507410] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:41.142 [2024-11-20 07:14:44.507415] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:41.142 [2024-11-20 07:14:44.507424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:41.142 [2024-11-20 07:14:44.507435] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:41.142 [2024-11-20 07:14:44.507443] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:41.142 [2024-11-20 07:14:44.507448] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:41.142 [2024-11-20 07:14:44.507457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:41.142 [2024-11-20 07:14:44.507469] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:41.142 [2024-11-20 07:14:44.507477] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:41.142 [2024-11-20 07:14:44.507483] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:41.142 [2024-11-20 07:14:44.507491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:41.142 [2024-11-20 07:14:44.515329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:41.142 [2024-11-20 07:14:44.515357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:41.142 [2024-11-20 07:14:44.515375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:41.142 
[2024-11-20 07:14:44.515388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:41.142 ===================================================== 00:12:41.142 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:41.142 ===================================================== 00:12:41.142 Controller Capabilities/Features 00:12:41.142 ================================ 00:12:41.142 Vendor ID: 4e58 00:12:41.142 Subsystem Vendor ID: 4e58 00:12:41.142 Serial Number: SPDK2 00:12:41.142 Model Number: SPDK bdev Controller 00:12:41.142 Firmware Version: 25.01 00:12:41.142 Recommended Arb Burst: 6 00:12:41.142 IEEE OUI Identifier: 8d 6b 50 00:12:41.142 Multi-path I/O 00:12:41.142 May have multiple subsystem ports: Yes 00:12:41.142 May have multiple controllers: Yes 00:12:41.142 Associated with SR-IOV VF: No 00:12:41.142 Max Data Transfer Size: 131072 00:12:41.142 Max Number of Namespaces: 32 00:12:41.142 Max Number of I/O Queues: 127 00:12:41.142 NVMe Specification Version (VS): 1.3 00:12:41.142 NVMe Specification Version (Identify): 1.3 00:12:41.142 Maximum Queue Entries: 256 00:12:41.142 Contiguous Queues Required: Yes 00:12:41.142 Arbitration Mechanisms Supported 00:12:41.142 Weighted Round Robin: Not Supported 00:12:41.142 Vendor Specific: Not Supported 00:12:41.142 Reset Timeout: 15000 ms 00:12:41.142 Doorbell Stride: 4 bytes 00:12:41.142 NVM Subsystem Reset: Not Supported 00:12:41.142 Command Sets Supported 00:12:41.142 NVM Command Set: Supported 00:12:41.142 Boot Partition: Not Supported 00:12:41.142 Memory Page Size Minimum: 4096 bytes 00:12:41.142 Memory Page Size Maximum: 4096 bytes 00:12:41.142 Persistent Memory Region: Not Supported 00:12:41.142 Optional Asynchronous Events Supported 00:12:41.142 Namespace Attribute Notices: Supported 00:12:41.142 Firmware Activation Notices: Not Supported 00:12:41.142 ANA Change Notices: Not Supported 00:12:41.142 PLE Aggregate Log Change Notices: Not Supported 00:12:41.142 LBA Status Info Alert Notices: Not Supported 00:12:41.142 EGE Aggregate Log Change Notices: Not Supported 00:12:41.142 Normal NVM Subsystem Shutdown event: Not Supported 00:12:41.142 Zone Descriptor Change Notices: Not Supported 00:12:41.142 Discovery Log Change Notices: Not Supported 00:12:41.142 Controller Attributes 00:12:41.142 128-bit Host Identifier: Supported 00:12:41.142 Non-Operational Permissive Mode: Not Supported 00:12:41.142 NVM Sets: Not Supported 00:12:41.142 Read Recovery Levels: Not Supported 00:12:41.142 Endurance Groups: Not Supported 00:12:41.142 Predictable Latency Mode: Not Supported 00:12:41.142 Traffic Based Keep ALive: Not Supported 00:12:41.142 Namespace Granularity: Not Supported 00:12:41.142 SQ Associations: Not Supported 00:12:41.142 UUID List: Not Supported 00:12:41.142 Multi-Domain Subsystem: Not Supported 00:12:41.142 Fixed Capacity Management: Not Supported 00:12:41.142 Variable Capacity Management: Not Supported 00:12:41.142 Delete Endurance Group: Not Supported 00:12:41.142 Delete NVM Set: Not Supported 00:12:41.142 Extended LBA Formats Supported: Not Supported 00:12:41.142 Flexible Data Placement Supported: Not Supported 00:12:41.142 00:12:41.142 Controller Memory Buffer Support 00:12:41.142 ================================ 00:12:41.142 Supported: No 00:12:41.142 00:12:41.142 Persistent Memory Region Support 00:12:41.142 ================================ 00:12:41.142 Supported: No 00:12:41.142 00:12:41.142 Admin Command Set Attributes 
00:12:41.142 ============================ 00:12:41.142 Security Send/Receive: Not Supported 00:12:41.142 Format NVM: Not Supported 00:12:41.142 Firmware Activate/Download: Not Supported 00:12:41.142 Namespace Management: Not Supported 00:12:41.142 Device Self-Test: Not Supported 00:12:41.142 Directives: Not Supported 00:12:41.142 NVMe-MI: Not Supported 00:12:41.142 Virtualization Management: Not Supported 00:12:41.142 Doorbell Buffer Config: Not Supported 00:12:41.142 Get LBA Status Capability: Not Supported 00:12:41.142 Command & Feature Lockdown Capability: Not Supported 00:12:41.143 Abort Command Limit: 4 00:12:41.143 Async Event Request Limit: 4 00:12:41.143 Number of Firmware Slots: N/A 00:12:41.143 Firmware Slot 1 Read-Only: N/A 00:12:41.143 Firmware Activation Without Reset: N/A 00:12:41.143 Multiple Update Detection Support: N/A 00:12:41.143 Firmware Update Granularity: No Information Provided 00:12:41.143 Per-Namespace SMART Log: No 00:12:41.143 Asymmetric Namespace Access Log Page: Not Supported 00:12:41.143 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:41.143 Command Effects Log Page: Supported 00:12:41.143 Get Log Page Extended Data: Supported 00:12:41.143 Telemetry Log Pages: Not Supported 00:12:41.143 Persistent Event Log Pages: Not Supported 00:12:41.143 Supported Log Pages Log Page: May Support 00:12:41.143 Commands Supported & Effects Log Page: Not Supported 00:12:41.143 Feature Identifiers & Effects Log Page:May Support 00:12:41.143 NVMe-MI Commands & Effects Log Page: May Support 00:12:41.143 Data Area 4 for Telemetry Log: Not Supported 00:12:41.143 Error Log Page Entries Supported: 128 00:12:41.143 Keep Alive: Supported 00:12:41.143 Keep Alive Granularity: 10000 ms 00:12:41.143 00:12:41.143 NVM Command Set Attributes 00:12:41.143 ========================== 00:12:41.143 Submission Queue Entry Size 00:12:41.143 Max: 64 00:12:41.143 Min: 64 00:12:41.143 Completion Queue Entry Size 00:12:41.143 Max: 16 00:12:41.143 Min: 16 00:12:41.143 Number of Namespaces: 32 00:12:41.143 Compare Command: Supported 00:12:41.143 Write Uncorrectable Command: Not Supported 00:12:41.143 Dataset Management Command: Supported 00:12:41.143 Write Zeroes Command: Supported 00:12:41.143 Set Features Save Field: Not Supported 00:12:41.143 Reservations: Not Supported 00:12:41.143 Timestamp: Not Supported 00:12:41.143 Copy: Supported 00:12:41.143 Volatile Write Cache: Present 00:12:41.143 Atomic Write Unit (Normal): 1 00:12:41.143 Atomic Write Unit (PFail): 1 00:12:41.143 Atomic Compare & Write Unit: 1 00:12:41.143 Fused Compare & Write: Supported 00:12:41.143 Scatter-Gather List 00:12:41.143 SGL Command Set: Supported (Dword aligned) 00:12:41.143 SGL Keyed: Not Supported 00:12:41.143 SGL Bit Bucket Descriptor: Not Supported 00:12:41.143 SGL Metadata Pointer: Not Supported 00:12:41.143 Oversized SGL: Not Supported 00:12:41.143 SGL Metadata Address: Not Supported 00:12:41.143 SGL Offset: Not Supported 00:12:41.143 Transport SGL Data Block: Not Supported 00:12:41.143 Replay Protected Memory Block: Not Supported 00:12:41.143 00:12:41.143 Firmware Slot Information 00:12:41.143 ========================= 00:12:41.143 Active slot: 1 00:12:41.143 Slot 1 Firmware Revision: 25.01 00:12:41.143 00:12:41.143 00:12:41.143 Commands Supported and Effects 00:12:41.143 ============================== 00:12:41.143 Admin Commands 00:12:41.143 -------------- 00:12:41.143 Get Log Page (02h): Supported 00:12:41.143 Identify (06h): Supported 00:12:41.143 Abort (08h): Supported 00:12:41.143 Set Features (09h): Supported 
00:12:41.143 Get Features (0Ah): Supported 00:12:41.143 Asynchronous Event Request (0Ch): Supported 00:12:41.143 Keep Alive (18h): Supported 00:12:41.143 I/O Commands 00:12:41.143 ------------ 00:12:41.143 Flush (00h): Supported LBA-Change 00:12:41.143 Write (01h): Supported LBA-Change 00:12:41.143 Read (02h): Supported 00:12:41.143 Compare (05h): Supported 00:12:41.143 Write Zeroes (08h): Supported LBA-Change 00:12:41.143 Dataset Management (09h): Supported LBA-Change 00:12:41.143 Copy (19h): Supported LBA-Change 00:12:41.143 00:12:41.143 Error Log 00:12:41.143 ========= 00:12:41.143 00:12:41.143 Arbitration 00:12:41.143 =========== 00:12:41.143 Arbitration Burst: 1 00:12:41.143 00:12:41.143 Power Management 00:12:41.143 ================ 00:12:41.143 Number of Power States: 1 00:12:41.143 Current Power State: Power State #0 00:12:41.143 Power State #0: 00:12:41.143 Max Power: 0.00 W 00:12:41.143 Non-Operational State: Operational 00:12:41.143 Entry Latency: Not Reported 00:12:41.143 Exit Latency: Not Reported 00:12:41.143 Relative Read Throughput: 0 00:12:41.143 Relative Read Latency: 0 00:12:41.143 Relative Write Throughput: 0 00:12:41.143 Relative Write Latency: 0 00:12:41.143 Idle Power: Not Reported 00:12:41.143 Active Power: Not Reported 00:12:41.143 Non-Operational Permissive Mode: Not Supported 00:12:41.143 00:12:41.143 Health Information 00:12:41.143 ================== 00:12:41.143 Critical Warnings: 00:12:41.143 Available Spare Space: OK 00:12:41.143 Temperature: OK 00:12:41.143 Device Reliability: OK 00:12:41.143 Read Only: No 00:12:41.143 Volatile Memory Backup: OK 00:12:41.143 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:41.143 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:41.143 Available Spare: 0% 00:12:41.143 Available Sp[2024-11-20 07:14:44.515512] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:41.143 [2024-11-20 07:14:44.523316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:41.143 [2024-11-20 07:14:44.523366] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:12:41.143 [2024-11-20 07:14:44.523384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.143 [2024-11-20 07:14:44.523395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.143 [2024-11-20 07:14:44.523404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.143 [2024-11-20 07:14:44.523414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.143 [2024-11-20 07:14:44.523504] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:41.143 [2024-11-20 07:14:44.523526] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:41.143 [2024-11-20 07:14:44.524508] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:41.143 [2024-11-20 07:14:44.524579] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:12:41.143 [2024-11-20 07:14:44.524593] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:12:41.143 [2024-11-20 07:14:44.525527] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:41.143 [2024-11-20 07:14:44.525552] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:12:41.143 [2024-11-20 07:14:44.525620] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:41.143 [2024-11-20 07:14:44.526836] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:41.143 are Threshold: 0% 00:12:41.143 Life Percentage Used: 0% 00:12:41.143 Data Units Read: 0 00:12:41.143 Data Units Written: 0 00:12:41.143 Host Read Commands: 0 00:12:41.143 Host Write Commands: 0 00:12:41.143 Controller Busy Time: 0 minutes 00:12:41.143 Power Cycles: 0 00:12:41.143 Power On Hours: 0 hours 00:12:41.143 Unsafe Shutdowns: 0 00:12:41.143 Unrecoverable Media Errors: 0 00:12:41.143 Lifetime Error Log Entries: 0 00:12:41.143 Warning Temperature Time: 0 minutes 00:12:41.143 Critical Temperature Time: 0 minutes 00:12:41.143 00:12:41.143 Number of Queues 00:12:41.143 ================ 00:12:41.143 Number of I/O Submission Queues: 127 00:12:41.143 Number of I/O Completion Queues: 127 00:12:41.143 00:12:41.143 Active Namespaces 00:12:41.143 ================= 00:12:41.143 Namespace ID:1 00:12:41.143 Error Recovery Timeout: Unlimited 00:12:41.143 Command Set Identifier: NVM (00h) 00:12:41.143 Deallocate: Supported 00:12:41.143 Deallocated/Unwritten Error: Not Supported 00:12:41.143 Deallocated Read Value: Unknown 00:12:41.143 Deallocate in Write Zeroes: Not Supported 00:12:41.143 Deallocated Guard Field: 0xFFFF 00:12:41.143 Flush: Supported 00:12:41.143 Reservation: Supported 00:12:41.143 Namespace Sharing Capabilities: Multiple Controllers 00:12:41.143 Size (in LBAs): 131072 (0GiB) 00:12:41.143 Capacity (in LBAs): 131072 (0GiB) 00:12:41.143 Utilization (in LBAs): 131072 (0GiB) 00:12:41.143 NGUID: 62F965D6F30C4C72AD9FE4880339220D 00:12:41.144 UUID: 62f965d6-f30c-4c72-ad9f-e4880339220d 00:12:41.144 Thin Provisioning: Not Supported 00:12:41.144 Per-NS Atomic Units: Yes 00:12:41.144 Atomic Boundary Size (Normal): 0 00:12:41.144 Atomic Boundary Size (PFail): 0 00:12:41.144 Atomic Boundary Offset: 0 00:12:41.144 Maximum Single Source Range Length: 65535 00:12:41.144 Maximum Copy Length: 65535 00:12:41.144 Maximum Source Range Count: 1 00:12:41.144 NGUID/EUI64 Never Reused: No 00:12:41.144 Namespace Write Protected: No 00:12:41.144 Number of LBA Formats: 1 00:12:41.144 Current LBA Format: LBA Format #00 00:12:41.144 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:41.144 00:12:41.402 07:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:41.402 [2024-11-20 07:14:44.779267] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:46.675 Initializing NVMe Controllers 00:12:46.675 
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:46.675 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:46.675 Initialization complete. Launching workers. 00:12:46.675 ======================================================== 00:12:46.675 Latency(us) 00:12:46.675 Device Information : IOPS MiB/s Average min max 00:12:46.675 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33417.34 130.54 3829.55 1176.70 7691.64 00:12:46.675 ======================================================== 00:12:46.675 Total : 33417.34 130.54 3829.55 1176.70 7691.64 00:12:46.675 00:12:46.675 [2024-11-20 07:14:49.888671] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:46.675 07:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:46.967 [2024-11-20 07:14:50.149474] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:52.270 Initializing NVMe Controllers 00:12:52.270 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:52.270 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:52.270 Initialization complete. Launching workers. 00:12:52.270 ======================================================== 00:12:52.270 Latency(us) 00:12:52.270 Device Information : IOPS MiB/s Average min max 00:12:52.270 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 30883.44 120.64 4144.02 1228.37 8357.05 00:12:52.270 ======================================================== 00:12:52.270 Total : 30883.44 120.64 4144.02 1228.37 8357.05 00:12:52.270 00:12:52.270 [2024-11-20 07:14:55.174090] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:52.270 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:52.270 [2024-11-20 07:14:55.403131] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:57.551 [2024-11-20 07:15:00.532456] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:57.551 Initializing NVMe Controllers 00:12:57.551 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:57.551 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:57.551 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:57.551 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:57.551 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:57.551 Initialization complete. Launching workers. 
00:12:57.551 Starting thread on core 2 00:12:57.551 Starting thread on core 3 00:12:57.551 Starting thread on core 1 00:12:57.551 07:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:57.551 [2024-11-20 07:15:00.867821] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:00.854 [2024-11-20 07:15:03.926869] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:00.854 Initializing NVMe Controllers 00:13:00.854 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:00.854 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:00.854 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:00.854 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:00.854 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:00.854 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:00.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:00.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:00.854 Initialization complete. Launching workers. 00:13:00.854 Starting thread on core 1 with urgent priority queue 00:13:00.854 Starting thread on core 2 with urgent priority queue 00:13:00.854 Starting thread on core 3 with urgent priority queue 00:13:00.854 Starting thread on core 0 with urgent priority queue 00:13:00.854 SPDK bdev Controller (SPDK2 ) core 0: 5369.33 IO/s 18.62 secs/100000 ios 00:13:00.854 SPDK bdev Controller (SPDK2 ) core 1: 5096.00 IO/s 19.62 secs/100000 ios 00:13:00.854 SPDK bdev Controller (SPDK2 ) core 2: 5266.67 IO/s 18.99 secs/100000 ios 00:13:00.854 SPDK bdev Controller (SPDK2 ) core 3: 5148.00 IO/s 19.43 secs/100000 ios 00:13:00.854 ======================================================== 00:13:00.854 00:13:00.854 07:15:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:00.854 [2024-11-20 07:15:04.251525] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:00.854 Initializing NVMe Controllers 00:13:00.854 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:00.854 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:00.854 Namespace ID: 1 size: 0GB 00:13:00.854 Initialization complete. 00:13:00.854 INFO: using host memory buffer for IO 00:13:00.854 Hello world! 
00:13:00.854 [2024-11-20 07:15:04.265627] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:01.113 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:01.371 [2024-11-20 07:15:04.576358] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:02.308 Initializing NVMe Controllers 00:13:02.308 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:02.308 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:02.308 Initialization complete. Launching workers. 00:13:02.308 submit (in ns) avg, min, max = 8814.6, 3697.8, 4016744.4 00:13:02.308 complete (in ns) avg, min, max = 24205.5, 2127.8, 4015937.8 00:13:02.308 00:13:02.308 Submit histogram 00:13:02.308 ================ 00:13:02.308 Range in us Cumulative Count 00:13:02.308 3.698 - 3.721: 0.5401% ( 69) 00:13:02.308 3.721 - 3.745: 2.7473% ( 282) 00:13:02.308 3.745 - 3.769: 8.1637% ( 692) 00:13:02.308 3.769 - 3.793: 16.0927% ( 1013) 00:13:02.308 3.793 - 3.816: 26.4011% ( 1317) 00:13:02.308 3.816 - 3.840: 34.7683% ( 1069) 00:13:02.308 3.840 - 3.864: 41.8989% ( 911) 00:13:02.308 3.864 - 3.887: 47.6832% ( 739) 00:13:02.308 3.887 - 3.911: 54.0780% ( 817) 00:13:02.308 3.911 - 3.935: 60.2145% ( 784) 00:13:02.308 3.935 - 3.959: 65.6700% ( 697) 00:13:02.308 3.959 - 3.982: 70.6168% ( 632) 00:13:02.308 3.982 - 4.006: 74.8121% ( 536) 00:13:02.308 4.006 - 4.030: 78.7101% ( 498) 00:13:02.308 4.030 - 4.053: 82.2480% ( 452) 00:13:02.308 4.053 - 4.077: 84.9092% ( 340) 00:13:02.308 4.077 - 4.101: 86.7173% ( 231) 00:13:02.308 4.101 - 4.124: 88.2279% ( 193) 00:13:02.308 4.124 - 4.148: 90.0673% ( 235) 00:13:02.308 4.148 - 4.172: 91.6797% ( 206) 00:13:02.308 4.172 - 4.196: 93.1904% ( 193) 00:13:02.308 4.196 - 4.219: 94.4818% ( 165) 00:13:02.308 4.219 - 4.243: 95.2567% ( 99) 00:13:02.308 4.243 - 4.267: 95.7733% ( 66) 00:13:02.308 4.267 - 4.290: 96.1490% ( 48) 00:13:02.308 4.290 - 4.314: 96.2899% ( 18) 00:13:02.308 4.314 - 4.338: 96.4465% ( 20) 00:13:02.308 4.338 - 4.361: 96.5247% ( 10) 00:13:02.308 4.361 - 4.385: 96.6187% ( 12) 00:13:02.308 4.385 - 4.409: 96.7126% ( 12) 00:13:02.308 4.409 - 4.433: 96.7987% ( 11) 00:13:02.308 4.433 - 4.456: 96.8848% ( 11) 00:13:02.308 4.456 - 4.480: 96.9396% ( 7) 00:13:02.308 4.480 - 4.504: 96.9944% ( 7) 00:13:02.308 4.504 - 4.527: 97.0413% ( 6) 00:13:02.308 4.527 - 4.551: 97.0492% ( 1) 00:13:02.308 4.551 - 4.575: 97.0726% ( 3) 00:13:02.308 4.575 - 4.599: 97.0883% ( 2) 00:13:02.308 4.622 - 4.646: 97.0961% ( 1) 00:13:02.308 4.646 - 4.670: 97.1039% ( 1) 00:13:02.308 4.693 - 4.717: 97.1196% ( 2) 00:13:02.308 4.717 - 4.741: 97.1353% ( 2) 00:13:02.308 4.788 - 4.812: 97.1431% ( 1) 00:13:02.308 4.836 - 4.859: 97.1509% ( 1) 00:13:02.308 4.883 - 4.907: 97.1587% ( 1) 00:13:02.308 4.907 - 4.930: 97.1822% ( 3) 00:13:02.308 4.930 - 4.954: 97.2214% ( 5) 00:13:02.309 4.954 - 4.978: 97.2370% ( 2) 00:13:02.309 4.978 - 5.001: 97.2605% ( 3) 00:13:02.309 5.001 - 5.025: 97.2918% ( 4) 00:13:02.309 5.025 - 5.049: 97.3075% ( 2) 00:13:02.309 5.049 - 5.073: 97.3857% ( 10) 00:13:02.309 5.073 - 5.096: 97.4170% ( 4) 00:13:02.309 5.096 - 5.120: 97.4718% ( 7) 00:13:02.309 5.120 - 5.144: 97.5344% ( 8) 00:13:02.309 5.144 - 5.167: 97.6049% ( 9) 00:13:02.309 5.167 - 5.191: 97.6362% ( 4) 00:13:02.309 5.191 - 
5.215: 97.6675% ( 4) 00:13:02.309 5.215 - 5.239: 97.7223% ( 7) 00:13:02.309 5.239 - 5.262: 97.7536% ( 4) 00:13:02.309 5.262 - 5.286: 97.7693% ( 2) 00:13:02.309 5.286 - 5.310: 97.8006% ( 4) 00:13:02.309 5.310 - 5.333: 97.8319% ( 4) 00:13:02.309 5.333 - 5.357: 97.8475% ( 2) 00:13:02.309 5.381 - 5.404: 97.8788% ( 4) 00:13:02.309 5.428 - 5.452: 97.8945% ( 2) 00:13:02.309 5.452 - 5.476: 97.9101% ( 2) 00:13:02.309 5.476 - 5.499: 97.9258% ( 2) 00:13:02.309 5.499 - 5.523: 97.9336% ( 1) 00:13:02.309 5.523 - 5.547: 97.9415% ( 1) 00:13:02.309 5.547 - 5.570: 97.9571% ( 2) 00:13:02.309 5.570 - 5.594: 97.9649% ( 1) 00:13:02.309 5.641 - 5.665: 97.9728% ( 1) 00:13:02.309 5.689 - 5.713: 97.9884% ( 2) 00:13:02.309 5.736 - 5.760: 97.9962% ( 1) 00:13:02.309 5.784 - 5.807: 98.0041% ( 1) 00:13:02.309 5.807 - 5.831: 98.0119% ( 1) 00:13:02.309 5.902 - 5.926: 98.0197% ( 1) 00:13:02.309 5.926 - 5.950: 98.0276% ( 1) 00:13:02.309 5.973 - 5.997: 98.0354% ( 1) 00:13:02.309 6.021 - 6.044: 98.0510% ( 2) 00:13:02.309 6.044 - 6.068: 98.0589% ( 1) 00:13:02.309 6.068 - 6.116: 98.0667% ( 1) 00:13:02.309 6.116 - 6.163: 98.0745% ( 1) 00:13:02.309 6.210 - 6.258: 98.0902% ( 2) 00:13:02.309 6.495 - 6.542: 98.1137% ( 3) 00:13:02.309 6.590 - 6.637: 98.1293% ( 2) 00:13:02.309 6.732 - 6.779: 98.1371% ( 1) 00:13:02.309 6.969 - 7.016: 98.1450% ( 1) 00:13:02.309 7.016 - 7.064: 98.1528% ( 1) 00:13:02.309 7.064 - 7.111: 98.1606% ( 1) 00:13:02.309 7.111 - 7.159: 98.1841% ( 3) 00:13:02.309 7.159 - 7.206: 98.1919% ( 1) 00:13:02.309 7.206 - 7.253: 98.2076% ( 2) 00:13:02.309 7.301 - 7.348: 98.2232% ( 2) 00:13:02.309 7.348 - 7.396: 98.2389% ( 2) 00:13:02.309 7.396 - 7.443: 98.2467% ( 1) 00:13:02.309 7.443 - 7.490: 98.2545% ( 1) 00:13:02.309 7.538 - 7.585: 98.2624% ( 1) 00:13:02.309 7.585 - 7.633: 98.2780% ( 2) 00:13:02.309 7.633 - 7.680: 98.3093% ( 4) 00:13:02.309 7.680 - 7.727: 98.3250% ( 2) 00:13:02.309 7.727 - 7.775: 98.3406% ( 2) 00:13:02.309 7.775 - 7.822: 98.3485% ( 1) 00:13:02.309 7.870 - 7.917: 98.3563% ( 1) 00:13:02.309 7.917 - 7.964: 98.3719% ( 2) 00:13:02.309 7.964 - 8.012: 98.3798% ( 1) 00:13:02.309 8.107 - 8.154: 98.3876% ( 1) 00:13:02.309 8.154 - 8.201: 98.3954% ( 1) 00:13:02.309 8.201 - 8.249: 98.4111% ( 2) 00:13:02.309 8.296 - 8.344: 98.4189% ( 1) 00:13:02.309 8.439 - 8.486: 98.4346% ( 2) 00:13:02.309 8.486 - 8.533: 98.4424% ( 1) 00:13:02.309 8.533 - 8.581: 98.4580% ( 2) 00:13:02.309 8.581 - 8.628: 98.4659% ( 1) 00:13:02.309 8.628 - 8.676: 98.4815% ( 2) 00:13:02.309 8.723 - 8.770: 98.4894% ( 1) 00:13:02.309 8.770 - 8.818: 98.5050% ( 2) 00:13:02.309 8.818 - 8.865: 98.5128% ( 1) 00:13:02.309 8.913 - 8.960: 98.5207% ( 1) 00:13:02.309 9.150 - 9.197: 98.5285% ( 1) 00:13:02.309 9.197 - 9.244: 98.5363% ( 1) 00:13:02.309 9.244 - 9.292: 98.5441% ( 1) 00:13:02.309 9.292 - 9.339: 98.5520% ( 1) 00:13:02.309 9.434 - 9.481: 98.5598% ( 1) 00:13:02.309 9.481 - 9.529: 98.5755% ( 2) 00:13:02.309 9.624 - 9.671: 98.5833% ( 1) 00:13:02.309 9.671 - 9.719: 98.5911% ( 1) 00:13:02.309 10.003 - 10.050: 98.6068% ( 2) 00:13:02.309 10.145 - 10.193: 98.6146% ( 1) 00:13:02.309 10.240 - 10.287: 98.6224% ( 1) 00:13:02.309 10.335 - 10.382: 98.6459% ( 3) 00:13:02.309 10.382 - 10.430: 98.6537% ( 1) 00:13:02.309 10.477 - 10.524: 98.6616% ( 1) 00:13:02.309 10.572 - 10.619: 98.6694% ( 1) 00:13:02.309 10.619 - 10.667: 98.6850% ( 2) 00:13:02.309 10.856 - 10.904: 98.7007% ( 2) 00:13:02.309 11.330 - 11.378: 98.7085% ( 1) 00:13:02.309 11.378 - 11.425: 98.7163% ( 1) 00:13:02.309 11.425 - 11.473: 98.7320% ( 2) 00:13:02.309 11.473 - 11.520: 98.7398% ( 1) 00:13:02.309 11.662 - 
11.710: 98.7477% ( 1) 00:13:02.309 11.710 - 11.757: 98.7555% ( 1) 00:13:02.309 11.852 - 11.899: 98.7633% ( 1) 00:13:02.309 11.899 - 11.947: 98.7711% ( 1) 00:13:02.309 12.041 - 12.089: 98.7790% ( 1) 00:13:02.309 12.089 - 12.136: 98.7868% ( 1) 00:13:02.309 12.326 - 12.421: 98.7946% ( 1) 00:13:02.309 12.516 - 12.610: 98.8024% ( 1) 00:13:02.309 12.610 - 12.705: 98.8103% ( 1) 00:13:02.309 12.705 - 12.800: 98.8181% ( 1) 00:13:02.309 12.800 - 12.895: 98.8259% ( 1) 00:13:02.309 12.990 - 13.084: 98.8338% ( 1) 00:13:02.309 13.084 - 13.179: 98.8416% ( 1) 00:13:02.309 13.369 - 13.464: 98.8572% ( 2) 00:13:02.309 13.464 - 13.559: 98.8729% ( 2) 00:13:02.309 13.559 - 13.653: 98.8807% ( 1) 00:13:02.309 13.653 - 13.748: 98.8885% ( 1) 00:13:02.309 13.748 - 13.843: 98.9042% ( 2) 00:13:02.309 13.843 - 13.938: 98.9120% ( 1) 00:13:02.309 14.222 - 14.317: 98.9198% ( 1) 00:13:02.309 14.317 - 14.412: 98.9355% ( 2) 00:13:02.309 14.601 - 14.696: 98.9433% ( 1) 00:13:02.309 14.981 - 15.076: 98.9668% ( 3) 00:13:02.309 15.265 - 15.360: 98.9825% ( 2) 00:13:02.309 17.161 - 17.256: 98.9903% ( 1) 00:13:02.309 17.351 - 17.446: 98.9981% ( 1) 00:13:02.309 17.636 - 17.730: 99.0138% ( 2) 00:13:02.309 17.825 - 17.920: 99.0451% ( 4) 00:13:02.309 17.920 - 18.015: 99.0686% ( 3) 00:13:02.309 18.015 - 18.110: 99.0999% ( 4) 00:13:02.309 18.110 - 18.204: 99.1703% ( 9) 00:13:02.309 18.204 - 18.299: 99.2173% ( 6) 00:13:02.309 18.299 - 18.394: 99.2721% ( 7) 00:13:02.309 18.394 - 18.489: 99.3112% ( 5) 00:13:02.309 18.489 - 18.584: 99.3973% ( 11) 00:13:02.309 18.584 - 18.679: 99.4521% ( 7) 00:13:02.309 18.679 - 18.773: 99.5225% ( 9) 00:13:02.309 18.773 - 18.868: 99.5695% ( 6) 00:13:02.309 18.868 - 18.963: 99.6165% ( 6) 00:13:02.309 18.963 - 19.058: 99.6634% ( 6) 00:13:02.309 19.058 - 19.153: 99.7182% ( 7) 00:13:02.310 19.153 - 19.247: 99.7730% ( 7) 00:13:02.310 19.247 - 19.342: 99.7887% ( 2) 00:13:02.310 19.342 - 19.437: 99.7965% ( 1) 00:13:02.310 19.437 - 19.532: 99.8043% ( 1) 00:13:02.310 19.532 - 19.627: 99.8121% ( 1) 00:13:02.310 19.627 - 19.721: 99.8200% ( 1) 00:13:02.310 19.721 - 19.816: 99.8278% ( 1) 00:13:02.310 19.816 - 19.911: 99.8356% ( 1) 00:13:02.310 20.101 - 20.196: 99.8435% ( 1) 00:13:02.310 21.428 - 21.523: 99.8513% ( 1) 00:13:02.310 22.945 - 23.040: 99.8591% ( 1) 00:13:02.310 23.514 - 23.609: 99.8669% ( 1) 00:13:02.310 23.609 - 23.704: 99.8748% ( 1) 00:13:02.310 29.393 - 29.582: 99.8826% ( 1) 00:13:02.310 3980.705 - 4004.978: 99.9843% ( 13) 00:13:02.310 4004.978 - 4029.250: 100.0000% ( 2) 00:13:02.310 00:13:02.310 Complete histogram 00:13:02.310 ================== 00:13:02.310 Range in us Cumulative Count 00:13:02.310 2.121 - 2.133: 0.3444% ( 44) 00:13:02.310 2.133 - 2.145: 20.4524% ( 2569) 00:13:02.310 2.145 - 2.157: 40.9361% ( 2617) 00:13:02.310 2.157 - 2.169: 42.5250% ( 203) 00:13:02.310 2.169 - 2.181: 51.1584% ( 1103) 00:13:02.310 2.181 - 2.193: 56.0269% ( 622) 00:13:02.310 2.193 - 2.204: 57.4045% ( 176) 00:13:02.310 2.204 - 2.216: 68.8322% ( 1460) 00:13:02.310 2.216 - 2.228: 80.6512% ( 1510) 00:13:02.310 2.228 - 2.240: 81.7157% ( 136) 00:13:02.310 2.240 - 2.252: 85.9894% ( 546) 00:13:02.310 2.252 - 2.264: 89.9029% ( 500) 00:13:02.310 2.264 - 2.276: 90.6935% ( 101) 00:13:02.310 2.276 - 2.287: 91.8989% ( 154) 00:13:02.310 2.287 - 2.299: 92.9634% ( 136) 00:13:02.310 2.299 - 2.311: 94.5523% ( 203) 00:13:02.310 2.311 - 2.323: 95.3350% ( 100) 00:13:02.310 2.323 - 2.335: 95.4289% ( 12) 00:13:02.310 2.335 - 2.347: 95.5150% ( 11) 00:13:02.310 2.347 - 2.359: 95.6246% ( 14) 00:13:02.310 2.359 - 2.370: 95.7264% ( 13) 00:13:02.310 
2.370 - 2.382: 95.9142% ( 24) 00:13:02.310 2.382 - 2.394: 96.2195% ( 39) 00:13:02.310 2.394 - 2.406: 96.3134% ( 12) 00:13:02.310 2.406 - 2.418: 96.3291% ( 2) 00:13:02.310 2.418 - 2.430: 96.3995% ( 9) 00:13:02.310 2.430 - 2.441: 96.5560% ( 20) 00:13:02.310 2.441 - 2.453: 96.6656% ( 14) 00:13:02.310 2.453 - 2.465: 96.9709% ( 39) 00:13:02.310 2.465 - 2.477: 97.2214% ( 32) 00:13:02.310 2.477 - 2.489: 97.4405% ( 28) 00:13:02.310 2.489 - 2.501: 97.6362% ( 25) 00:13:02.310 2.501 - 2.513: 97.7927% ( 20) 00:13:02.310 2.513 - 2.524: 97.9571% ( 21) 00:13:02.310 2.524 - 2.536: 98.0745% ( 15) 00:13:02.310 2.536 - 2.548: 98.1528% ( 10) 00:13:02.310 2.548 - 2.560: 98.2076% ( 7) 00:13:02.310 2.560 - 2.572: 98.2232% ( 2) 00:13:02.310 2.572 - 2.584: 98.2702% ( 6) 00:13:02.310 2.584 - 2.596: 98.3093% ( 5) 00:13:02.310 2.596 - 2.607: 98.3406% ( 4) 00:13:02.310 2.607 - 2.619: 98.3641% ( 3) 00:13:02.310 2.619 - 2.631: 98.3798% ( 2) 00:13:02.310 2.643 - 2.655: 98.4033% ( 3) 00:13:02.310 2.655 - 2.667: 98.4189% ( 2) 00:13:02.310 2.667 - 2.679: 98.4267% ( 1) 00:13:02.310 2.726 - 2.738: 98.4346% ( 1) 00:13:02.310 2.738 - 2.750: 98.4502% ( 2) 00:13:02.310 2.773 - 2.785: 98.4580% ( 1) 00:13:02.310 2.785 - 2.797: 98.4659% ( 1) 00:13:02.310 2.809 - 2.821: 98.4737% ( 1) 00:13:02.310 2.856 - 2.868: 98.4815% ( 1) 00:13:02.310 3.224 - 3.247: 98.4894% ( 1) 00:13:02.310 3.295 - 3.319: 9[2024-11-20 07:15:05.674139] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:02.310 8.4972% ( 1) 00:13:02.310 3.508 - 3.532: 98.5050% ( 1) 00:13:02.310 3.556 - 3.579: 98.5128% ( 1) 00:13:02.310 3.603 - 3.627: 98.5285% ( 2) 00:13:02.310 3.627 - 3.650: 98.5363% ( 1) 00:13:02.310 3.674 - 3.698: 98.5520% ( 2) 00:13:02.310 3.698 - 3.721: 98.5598% ( 1) 00:13:02.310 3.721 - 3.745: 98.5755% ( 2) 00:13:02.310 3.745 - 3.769: 98.5833% ( 1) 00:13:02.310 3.769 - 3.793: 98.6068% ( 3) 00:13:02.310 3.816 - 3.840: 98.6224% ( 2) 00:13:02.310 3.840 - 3.864: 98.6302% ( 1) 00:13:02.310 3.864 - 3.887: 98.6459% ( 2) 00:13:02.310 3.911 - 3.935: 98.6616% ( 2) 00:13:02.310 3.935 - 3.959: 98.6694% ( 1) 00:13:02.310 4.077 - 4.101: 98.6772% ( 1) 00:13:02.310 4.101 - 4.124: 98.6850% ( 1) 00:13:02.310 4.148 - 4.172: 98.6929% ( 1) 00:13:02.310 4.172 - 4.196: 98.7085% ( 2) 00:13:02.310 4.219 - 4.243: 98.7163% ( 1) 00:13:02.310 4.267 - 4.290: 98.7242% ( 1) 00:13:02.310 4.314 - 4.338: 98.7320% ( 1) 00:13:02.310 4.409 - 4.433: 98.7398% ( 1) 00:13:02.310 4.836 - 4.859: 98.7477% ( 1) 00:13:02.310 5.333 - 5.357: 98.7555% ( 1) 00:13:02.310 5.547 - 5.570: 98.7633% ( 1) 00:13:02.310 5.570 - 5.594: 98.7711% ( 1) 00:13:02.310 5.665 - 5.689: 98.7790% ( 1) 00:13:02.310 5.689 - 5.713: 98.7868% ( 1) 00:13:02.310 5.831 - 5.855: 98.7946% ( 1) 00:13:02.310 6.353 - 6.400: 98.8024% ( 1) 00:13:02.310 6.495 - 6.542: 98.8103% ( 1) 00:13:02.310 6.590 - 6.637: 98.8181% ( 1) 00:13:02.310 6.684 - 6.732: 98.8338% ( 2) 00:13:02.310 6.732 - 6.779: 98.8494% ( 2) 00:13:02.310 6.969 - 7.016: 98.8572% ( 1) 00:13:02.310 7.159 - 7.206: 98.8651% ( 1) 00:13:02.310 7.964 - 8.012: 98.8729% ( 1) 00:13:02.310 9.908 - 9.956: 98.8807% ( 1) 00:13:02.310 11.852 - 11.899: 98.8885% ( 1) 00:13:02.310 15.739 - 15.834: 98.9042% ( 2) 00:13:02.310 15.929 - 16.024: 98.9198% ( 2) 00:13:02.310 16.024 - 16.119: 98.9512% ( 4) 00:13:02.310 16.119 - 16.213: 98.9590% ( 1) 00:13:02.310 16.213 - 16.308: 98.9825% ( 3) 00:13:02.310 16.308 - 16.403: 99.0216% ( 5) 00:13:02.310 16.403 - 16.498: 99.0529% ( 4) 00:13:02.310 16.498 - 16.593: 99.0607% ( 1) 00:13:02.310 16.593 - 16.687: 
99.0999% ( 5) 00:13:02.310 16.687 - 16.782: 99.1155% ( 2) 00:13:02.310 16.782 - 16.877: 99.1547% ( 5) 00:13:02.310 16.877 - 16.972: 99.1781% ( 3) 00:13:02.310 16.972 - 17.067: 99.2251% ( 6) 00:13:02.310 17.067 - 17.161: 99.2642% ( 5) 00:13:02.310 17.161 - 17.256: 99.2721% ( 1) 00:13:02.310 17.256 - 17.351: 99.2956% ( 3) 00:13:02.310 17.351 - 17.446: 99.3190% ( 3) 00:13:02.310 17.446 - 17.541: 99.3503% ( 4) 00:13:02.310 17.825 - 17.920: 99.3660% ( 2) 00:13:02.310 17.920 - 18.015: 99.3817% ( 2) 00:13:02.310 18.110 - 18.204: 99.3895% ( 1) 00:13:02.310 18.584 - 18.679: 99.3973% ( 1) 00:13:02.310 18.773 - 18.868: 99.4051% ( 1) 00:13:02.310 18.868 - 18.963: 99.4208% ( 2) 00:13:02.310 19.153 - 19.247: 99.4286% ( 1) 00:13:02.310 19.247 - 19.342: 99.4364% ( 1) 00:13:02.310 19.342 - 19.437: 99.4443% ( 1) 00:13:02.310 26.548 - 26.738: 99.4521% ( 1) 00:13:02.310 3980.705 - 4004.978: 99.9452% ( 63) 00:13:02.310 4004.978 - 4029.250: 100.0000% ( 7) 00:13:02.310 00:13:02.310 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:02.310 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:02.310 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:02.310 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:02.310 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:02.569 [ 00:13:02.569 { 00:13:02.569 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:02.569 "subtype": "Discovery", 00:13:02.569 "listen_addresses": [], 00:13:02.569 "allow_any_host": true, 00:13:02.569 "hosts": [] 00:13:02.569 }, 00:13:02.569 { 00:13:02.569 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:02.569 "subtype": "NVMe", 00:13:02.569 "listen_addresses": [ 00:13:02.569 { 00:13:02.569 "trtype": "VFIOUSER", 00:13:02.569 "adrfam": "IPv4", 00:13:02.569 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:02.569 "trsvcid": "0" 00:13:02.569 } 00:13:02.569 ], 00:13:02.569 "allow_any_host": true, 00:13:02.569 "hosts": [], 00:13:02.569 "serial_number": "SPDK1", 00:13:02.569 "model_number": "SPDK bdev Controller", 00:13:02.569 "max_namespaces": 32, 00:13:02.569 "min_cntlid": 1, 00:13:02.569 "max_cntlid": 65519, 00:13:02.569 "namespaces": [ 00:13:02.569 { 00:13:02.569 "nsid": 1, 00:13:02.569 "bdev_name": "Malloc1", 00:13:02.569 "name": "Malloc1", 00:13:02.569 "nguid": "699BE2B6E14A42F79CCFFE72837B449F", 00:13:02.569 "uuid": "699be2b6-e14a-42f7-9ccf-fe72837b449f" 00:13:02.569 }, 00:13:02.569 { 00:13:02.569 "nsid": 2, 00:13:02.569 "bdev_name": "Malloc3", 00:13:02.569 "name": "Malloc3", 00:13:02.569 "nguid": "0034CCCA5B2C40BDA720CF277B4E432A", 00:13:02.569 "uuid": "0034ccca-5b2c-40bd-a720-cf277b4e432a" 00:13:02.569 } 00:13:02.569 ] 00:13:02.569 }, 00:13:02.569 { 00:13:02.569 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:02.569 "subtype": "NVMe", 00:13:02.569 "listen_addresses": [ 00:13:02.569 { 00:13:02.569 "trtype": "VFIOUSER", 00:13:02.569 "adrfam": "IPv4", 00:13:02.569 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:02.569 "trsvcid": "0" 00:13:02.569 } 00:13:02.569 ], 00:13:02.569 "allow_any_host": true, 00:13:02.569 "hosts": [], 00:13:02.569 "serial_number": 
"SPDK2", 00:13:02.569 "model_number": "SPDK bdev Controller", 00:13:02.569 "max_namespaces": 32, 00:13:02.569 "min_cntlid": 1, 00:13:02.569 "max_cntlid": 65519, 00:13:02.569 "namespaces": [ 00:13:02.569 { 00:13:02.569 "nsid": 1, 00:13:02.569 "bdev_name": "Malloc2", 00:13:02.569 "name": "Malloc2", 00:13:02.569 "nguid": "62F965D6F30C4C72AD9FE4880339220D", 00:13:02.569 "uuid": "62f965d6-f30c-4c72-ad9f-e4880339220d" 00:13:02.569 } 00:13:02.569 ] 00:13:02.569 } 00:13:02.569 ] 00:13:02.569 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:02.569 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2480143 00:13:02.569 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:02.569 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:02.569 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:13:02.569 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:02.569 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:02.569 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:13:02.569 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:02.569 07:15:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:02.828 [2024-11-20 07:15:06.163650] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:03.086 Malloc4 00:13:03.086 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:03.344 [2024-11-20 07:15:06.561686] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:03.344 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:03.344 Asynchronous Event Request test 00:13:03.344 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:03.344 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:03.344 Registering asynchronous event callbacks... 00:13:03.344 Starting namespace attribute notice tests for all controllers... 00:13:03.344 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:03.344 aer_cb - Changed Namespace 00:13:03.344 Cleaning up... 
00:13:03.602 [ 00:13:03.602 { 00:13:03.602 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:03.602 "subtype": "Discovery", 00:13:03.602 "listen_addresses": [], 00:13:03.602 "allow_any_host": true, 00:13:03.602 "hosts": [] 00:13:03.602 }, 00:13:03.602 { 00:13:03.602 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:03.602 "subtype": "NVMe", 00:13:03.602 "listen_addresses": [ 00:13:03.602 { 00:13:03.602 "trtype": "VFIOUSER", 00:13:03.602 "adrfam": "IPv4", 00:13:03.602 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:03.602 "trsvcid": "0" 00:13:03.602 } 00:13:03.602 ], 00:13:03.602 "allow_any_host": true, 00:13:03.602 "hosts": [], 00:13:03.602 "serial_number": "SPDK1", 00:13:03.602 "model_number": "SPDK bdev Controller", 00:13:03.602 "max_namespaces": 32, 00:13:03.602 "min_cntlid": 1, 00:13:03.602 "max_cntlid": 65519, 00:13:03.602 "namespaces": [ 00:13:03.602 { 00:13:03.602 "nsid": 1, 00:13:03.602 "bdev_name": "Malloc1", 00:13:03.602 "name": "Malloc1", 00:13:03.602 "nguid": "699BE2B6E14A42F79CCFFE72837B449F", 00:13:03.602 "uuid": "699be2b6-e14a-42f7-9ccf-fe72837b449f" 00:13:03.602 }, 00:13:03.602 { 00:13:03.602 "nsid": 2, 00:13:03.602 "bdev_name": "Malloc3", 00:13:03.602 "name": "Malloc3", 00:13:03.602 "nguid": "0034CCCA5B2C40BDA720CF277B4E432A", 00:13:03.602 "uuid": "0034ccca-5b2c-40bd-a720-cf277b4e432a" 00:13:03.602 } 00:13:03.602 ] 00:13:03.602 }, 00:13:03.602 { 00:13:03.602 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:03.602 "subtype": "NVMe", 00:13:03.602 "listen_addresses": [ 00:13:03.602 { 00:13:03.602 "trtype": "VFIOUSER", 00:13:03.602 "adrfam": "IPv4", 00:13:03.602 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:03.602 "trsvcid": "0" 00:13:03.602 } 00:13:03.603 ], 00:13:03.603 "allow_any_host": true, 00:13:03.603 "hosts": [], 00:13:03.603 "serial_number": "SPDK2", 00:13:03.603 "model_number": "SPDK bdev Controller", 00:13:03.603 "max_namespaces": 32, 00:13:03.603 "min_cntlid": 1, 00:13:03.603 "max_cntlid": 65519, 00:13:03.603 "namespaces": [ 00:13:03.603 { 00:13:03.603 "nsid": 1, 00:13:03.603 "bdev_name": "Malloc2", 00:13:03.603 "name": "Malloc2", 00:13:03.603 "nguid": "62F965D6F30C4C72AD9FE4880339220D", 00:13:03.603 "uuid": "62f965d6-f30c-4c72-ad9f-e4880339220d" 00:13:03.603 }, 00:13:03.603 { 00:13:03.603 "nsid": 2, 00:13:03.603 "bdev_name": "Malloc4", 00:13:03.603 "name": "Malloc4", 00:13:03.603 "nguid": "FB1CF708E55745C48D8D8B7EBA1387E3", 00:13:03.603 "uuid": "fb1cf708-e557-45c4-8d8d-8b7eba1387e3" 00:13:03.603 } 00:13:03.603 ] 00:13:03.603 } 00:13:03.603 ] 00:13:03.603 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2480143 00:13:03.603 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:03.603 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2474401 00:13:03.603 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 2474401 ']' 00:13:03.603 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 2474401 00:13:03.603 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:13:03.603 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:03.603 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2474401 00:13:03.603 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:03.603 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:03.603 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2474401' 00:13:03.603 killing process with pid 2474401 00:13:03.603 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 2474401 00:13:03.603 07:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 2474401 00:13:03.861 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:03.861 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:03.861 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:03.861 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:03.861 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:03.861 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2480396 00:13:03.861 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:03.861 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2480396' 00:13:03.861 Process pid: 2480396 00:13:03.861 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:03.861 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2480396 00:13:03.861 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 2480396 ']' 00:13:03.861 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.861 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:03.861 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.861 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:03.861 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:03.861 [2024-11-20 07:15:07.255819] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:03.861 [2024-11-20 07:15:07.256940] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:13:03.861 [2024-11-20 07:15:07.257004] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:04.120 [2024-11-20 07:15:07.330403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:04.120 [2024-11-20 07:15:07.390789] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:04.120 [2024-11-20 07:15:07.390843] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:04.121 [2024-11-20 07:15:07.390871] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:04.121 [2024-11-20 07:15:07.390883] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:04.121 [2024-11-20 07:15:07.390892] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:04.121 [2024-11-20 07:15:07.392493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:04.121 [2024-11-20 07:15:07.392569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:04.121 [2024-11-20 07:15:07.392626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:04.121 [2024-11-20 07:15:07.392629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.121 [2024-11-20 07:15:07.488515] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:04.121 [2024-11-20 07:15:07.488753] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:04.121 [2024-11-20 07:15:07.489030] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:13:04.121 [2024-11-20 07:15:07.489611] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:04.121 [2024-11-20 07:15:07.489874] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:13:04.121 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:04.121 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:13:04.121 07:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:05.500 07:15:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:05.500 07:15:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:05.500 07:15:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:05.500 07:15:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:05.500 07:15:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:05.500 07:15:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:05.759 Malloc1 00:13:05.759 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:06.016 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:06.275 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:06.842 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:06.842 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:06.842 07:15:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:06.842 Malloc2 00:13:07.100 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:07.359 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:07.616 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:07.874 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:07.874 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2480396 00:13:07.874 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@952 -- # '[' -z 2480396 ']' 00:13:07.874 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 2480396 00:13:07.874 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:13:07.874 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:07.874 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2480396 00:13:07.874 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:07.874 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:07.874 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2480396' 00:13:07.874 killing process with pid 2480396 00:13:07.874 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 2480396 00:13:07.874 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 2480396 00:13:08.132 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:08.132 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:08.132 00:13:08.132 real 0m53.867s 00:13:08.132 user 3m28.195s 00:13:08.132 sys 0m3.966s 00:13:08.132 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:08.132 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:08.132 ************************************ 00:13:08.132 END TEST nvmf_vfio_user 00:13:08.132 ************************************ 00:13:08.132 07:15:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:08.132 07:15:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:08.132 07:15:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:08.132 07:15:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:08.132 ************************************ 00:13:08.132 START TEST nvmf_vfio_user_nvme_compliance 00:13:08.132 ************************************ 00:13:08.132 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:08.132 * Looking for test storage... 
00:13:08.132 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:08.132 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:08.132 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:13:08.132 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:08.391 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:08.391 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:08.391 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:08.391 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:08.391 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:13:08.391 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:13:08.391 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:13:08.391 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:13:08.391 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:13:08.391 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:13:08.391 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:13:08.391 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:08.391 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:13:08.391 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:13:08.391 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:08.391 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:08.391 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:13:08.391 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:13:08.391 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:08.391 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:13:08.391 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:13:08.391 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:13:08.391 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:13:08.391 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:08.391 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:13:08.391 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:13:08.391 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:08.391 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:08.391 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:13:08.391 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:08.391 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:08.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.391 --rc genhtml_branch_coverage=1 00:13:08.391 --rc genhtml_function_coverage=1 00:13:08.391 --rc genhtml_legend=1 00:13:08.391 --rc geninfo_all_blocks=1 00:13:08.391 --rc geninfo_unexecuted_blocks=1 00:13:08.391 00:13:08.391 ' 00:13:08.391 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:08.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.391 --rc genhtml_branch_coverage=1 00:13:08.391 --rc genhtml_function_coverage=1 00:13:08.391 --rc genhtml_legend=1 00:13:08.391 --rc geninfo_all_blocks=1 00:13:08.391 --rc geninfo_unexecuted_blocks=1 00:13:08.391 00:13:08.391 ' 00:13:08.391 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:08.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.391 --rc genhtml_branch_coverage=1 00:13:08.391 --rc genhtml_function_coverage=1 00:13:08.391 --rc genhtml_legend=1 00:13:08.391 --rc geninfo_all_blocks=1 00:13:08.391 --rc geninfo_unexecuted_blocks=1 00:13:08.391 00:13:08.391 ' 00:13:08.391 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:08.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.391 --rc genhtml_branch_coverage=1 00:13:08.391 --rc genhtml_function_coverage=1 00:13:08.391 --rc genhtml_legend=1 00:13:08.391 --rc geninfo_all_blocks=1 00:13:08.391 --rc 
geninfo_unexecuted_blocks=1 00:13:08.391 00:13:08.392 ' 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:08.392 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2481403 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2481403' 00:13:08.392 Process pid: 2481403 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2481403 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # '[' -z 2481403 ']' 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:08.392 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:08.392 [2024-11-20 07:15:11.683035] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:13:08.392 [2024-11-20 07:15:11.683108] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:08.392 [2024-11-20 07:15:11.746648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:08.392 [2024-11-20 07:15:11.801507] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:08.392 [2024-11-20 07:15:11.801560] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:08.392 [2024-11-20 07:15:11.801587] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:08.392 [2024-11-20 07:15:11.801598] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:08.392 [2024-11-20 07:15:11.801608] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:08.392 [2024-11-20 07:15:11.802966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:08.392 [2024-11-20 07:15:11.803030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:08.392 [2024-11-20 07:15:11.803033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.651 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:08.651 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@866 -- # return 0 00:13:08.651 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:09.585 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:09.585 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:09.585 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:09.585 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.585 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:09.586 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.586 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:09.586 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:09.586 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.586 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:09.586 malloc0 00:13:09.586 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.586 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:09.586 07:15:12 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.586 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:09.586 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.586 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:09.586 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.586 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:09.586 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.586 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:09.586 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.586 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:09.586 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.586 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:09.844 00:13:09.844 00:13:09.844 CUnit - A unit testing framework for C - Version 2.1-3 00:13:09.844 http://cunit.sourceforge.net/ 00:13:09.844 00:13:09.844 00:13:09.844 Suite: nvme_compliance 00:13:09.844 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-20 07:15:13.165839] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:09.844 [2024-11-20 07:15:13.167274] vfio_user.c: 800:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:09.844 [2024-11-20 07:15:13.167320] vfio_user.c:5503:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:09.844 [2024-11-20 07:15:13.167335] vfio_user.c:5596:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:09.844 [2024-11-20 07:15:13.168861] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:09.844 passed 00:13:09.844 Test: admin_identify_ctrlr_verify_fused ...[2024-11-20 07:15:13.253430] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:09.844 [2024-11-20 07:15:13.256451] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:10.101 passed 00:13:10.101 Test: admin_identify_ns ...[2024-11-20 07:15:13.342812] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:10.101 [2024-11-20 07:15:13.405354] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:10.101 [2024-11-20 07:15:13.413350] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:10.101 [2024-11-20 07:15:13.434447] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:13:10.101 passed 00:13:10.101 Test: admin_get_features_mandatory_features ...[2024-11-20 07:15:13.516416] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:10.101 [2024-11-20 07:15:13.519435] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:10.358 passed 00:13:10.358 Test: admin_get_features_optional_features ...[2024-11-20 07:15:13.604990] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:10.358 [2024-11-20 07:15:13.608009] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:10.358 passed 00:13:10.359 Test: admin_set_features_number_of_queues ...[2024-11-20 07:15:13.690810] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:10.616 [2024-11-20 07:15:13.793416] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:10.616 passed 00:13:10.616 Test: admin_get_log_page_mandatory_logs ...[2024-11-20 07:15:13.876508] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:10.616 [2024-11-20 07:15:13.879526] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:10.616 passed 00:13:10.616 Test: admin_get_log_page_with_lpo ...[2024-11-20 07:15:13.964709] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:10.616 [2024-11-20 07:15:14.032336] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:10.617 [2024-11-20 07:15:14.045392] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:10.874 passed 00:13:10.874 Test: fabric_property_get ...[2024-11-20 07:15:14.129277] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:10.874 [2024-11-20 07:15:14.130577] vfio_user.c:5596:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:10.874 [2024-11-20 07:15:14.132307] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:10.874 passed 00:13:10.874 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-20 07:15:14.218914] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:10.874 [2024-11-20 07:15:14.220233] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:10.874 [2024-11-20 07:15:14.221936] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:10.874 passed 00:13:10.874 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-20 07:15:14.303033] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:11.132 [2024-11-20 07:15:14.387329] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:11.132 [2024-11-20 07:15:14.403328] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:11.132 [2024-11-20 07:15:14.408421] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:11.132 passed 00:13:11.132 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-20 07:15:14.491548] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:11.132 [2024-11-20 07:15:14.492865] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:11.132 [2024-11-20 07:15:14.494572] vfio_user.c:2794:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:13:11.132 passed 00:13:11.390 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-20 07:15:14.578884] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:11.390 [2024-11-20 07:15:14.654313] vfio_user.c:2315:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:11.390 [2024-11-20 07:15:14.678328] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:11.390 [2024-11-20 07:15:14.683423] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:11.390 passed 00:13:11.390 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-20 07:15:14.766899] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:11.390 [2024-11-20 07:15:14.768176] vfio_user.c:2154:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:11.390 [2024-11-20 07:15:14.768229] vfio_user.c:2148:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:11.390 [2024-11-20 07:15:14.769926] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:11.390 passed 00:13:11.647 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-20 07:15:14.852748] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:11.647 [2024-11-20 07:15:14.944310] vfio_user.c:2236:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:13:11.647 [2024-11-20 07:15:14.952312] vfio_user.c:2236:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:11.647 [2024-11-20 07:15:14.960326] vfio_user.c:2034:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:11.647 [2024-11-20 07:15:14.968312] vfio_user.c:2034:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:11.647 [2024-11-20 07:15:14.997441] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:11.647 passed 00:13:11.905 Test: admin_create_io_sq_verify_pc ...[2024-11-20 07:15:15.081069] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:11.905 [2024-11-20 07:15:15.097327] vfio_user.c:2047:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:11.905 [2024-11-20 07:15:15.114602] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:11.905 passed 00:13:11.905 Test: admin_create_io_qp_max_qps ...[2024-11-20 07:15:15.197142] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:13.275 [2024-11-20 07:15:16.297334] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:13:13.275 [2024-11-20 07:15:16.684697] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:13.532 passed 00:13:13.532 Test: admin_create_io_sq_shared_cq ...[2024-11-20 07:15:16.766846] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:13.532 [2024-11-20 07:15:16.902328] vfio_user.c:2315:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:13.532 [2024-11-20 07:15:16.939404] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:13.788 passed 00:13:13.788 00:13:13.788 Run Summary: Type Total Ran Passed Failed Inactive 00:13:13.788 suites 1 1 n/a 0 0 00:13:13.789 tests 18 18 18 0 0 00:13:13.789 asserts 
360 360 360 0 n/a 00:13:13.789 00:13:13.789 Elapsed time = 1.563 seconds 00:13:13.789 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2481403 00:13:13.789 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' -z 2481403 ']' 00:13:13.789 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # kill -0 2481403 00:13:13.789 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # uname 00:13:13.789 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:13.789 07:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2481403 00:13:13.789 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:13.789 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:13.789 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2481403' 00:13:13.789 killing process with pid 2481403 00:13:13.789 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@971 -- # kill 2481403 00:13:13.789 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@976 -- # wait 2481403 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:14.047 00:13:14.047 real 0m5.778s 00:13:14.047 user 0m16.242s 00:13:14.047 sys 0m0.530s 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:14.047 ************************************ 00:13:14.047 END TEST nvmf_vfio_user_nvme_compliance 00:13:14.047 ************************************ 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:14.047 ************************************ 00:13:14.047 START TEST nvmf_vfio_user_fuzz 00:13:14.047 ************************************ 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:14.047 * Looking for test storage... 
00:13:14.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:14.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.047 --rc genhtml_branch_coverage=1 00:13:14.047 --rc genhtml_function_coverage=1 00:13:14.047 --rc genhtml_legend=1 00:13:14.047 --rc geninfo_all_blocks=1 00:13:14.047 --rc geninfo_unexecuted_blocks=1 00:13:14.047 00:13:14.047 ' 00:13:14.047 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:14.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.047 --rc genhtml_branch_coverage=1 00:13:14.047 --rc genhtml_function_coverage=1 00:13:14.047 --rc genhtml_legend=1 00:13:14.047 --rc geninfo_all_blocks=1 00:13:14.048 --rc geninfo_unexecuted_blocks=1 00:13:14.048 00:13:14.048 ' 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:14.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.048 --rc genhtml_branch_coverage=1 00:13:14.048 --rc genhtml_function_coverage=1 00:13:14.048 --rc genhtml_legend=1 00:13:14.048 --rc geninfo_all_blocks=1 00:13:14.048 --rc geninfo_unexecuted_blocks=1 00:13:14.048 00:13:14.048 ' 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:14.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.048 --rc genhtml_branch_coverage=1 00:13:14.048 --rc genhtml_function_coverage=1 00:13:14.048 --rc genhtml_legend=1 00:13:14.048 --rc geninfo_all_blocks=1 00:13:14.048 --rc geninfo_unexecuted_blocks=1 00:13:14.048 00:13:14.048 ' 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:14.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2482135 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2482135' 00:13:14.048 Process pid: 2482135 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2482135 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # '[' -z 2482135 ']' 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:14.048 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:14.613 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:14.613 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@866 -- # return 0 00:13:14.613 07:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:15.547 07:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:15.547 07:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.547 07:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:15.547 07:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.547 07:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:15.547 07:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:15.547 07:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.547 07:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:15.547 malloc0 00:13:15.547 07:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.547 07:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:15.547 07:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.547 07:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:15.547 07:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.547 07:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:15.547 07:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.547 07:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:15.547 07:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.547 07:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:15.547 07:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.547 07:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:15.547 07:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.547 07:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
00:13:15.547 07:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:47.610 Fuzzing completed. Shutting down the fuzz application 00:13:47.610 00:13:47.610 Dumping successful admin opcodes: 00:13:47.610 8, 9, 10, 24, 00:13:47.610 Dumping successful io opcodes: 00:13:47.610 0, 00:13:47.610 NS: 0x20000081ef00 I/O qp, Total commands completed: 727741, total successful commands: 2832, random_seed: 529760128 00:13:47.610 NS: 0x20000081ef00 admin qp, Total commands completed: 146467, total successful commands: 1188, random_seed: 2985946560 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2482135 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' -z 2482135 ']' 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # kill -0 2482135 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # uname 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2482135 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2482135' 00:13:47.610 killing process with pid 2482135 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@971 -- # kill 2482135 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@976 -- # wait 2482135 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:47.610 00:13:47.610 real 0m32.223s 00:13:47.610 user 0m33.494s 00:13:47.610 sys 0m27.636s 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:47.610 
************************************ 00:13:47.610 END TEST nvmf_vfio_user_fuzz 00:13:47.610 ************************************ 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:47.610 ************************************ 00:13:47.610 START TEST nvmf_auth_target 00:13:47.610 ************************************ 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:47.610 * Looking for test storage... 00:13:47.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:47.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.610 --rc genhtml_branch_coverage=1 00:13:47.610 --rc genhtml_function_coverage=1 00:13:47.610 --rc genhtml_legend=1 00:13:47.610 --rc geninfo_all_blocks=1 00:13:47.610 --rc geninfo_unexecuted_blocks=1 00:13:47.610 00:13:47.610 ' 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:47.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.610 --rc genhtml_branch_coverage=1 00:13:47.610 --rc genhtml_function_coverage=1 00:13:47.610 --rc genhtml_legend=1 00:13:47.610 --rc geninfo_all_blocks=1 00:13:47.610 --rc geninfo_unexecuted_blocks=1 00:13:47.610 00:13:47.610 ' 00:13:47.610 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:47.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.611 --rc genhtml_branch_coverage=1 00:13:47.611 --rc genhtml_function_coverage=1 00:13:47.611 --rc genhtml_legend=1 00:13:47.611 --rc geninfo_all_blocks=1 00:13:47.611 --rc geninfo_unexecuted_blocks=1 00:13:47.611 00:13:47.611 ' 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:47.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.611 --rc genhtml_branch_coverage=1 00:13:47.611 --rc genhtml_function_coverage=1 00:13:47.611 --rc genhtml_legend=1 00:13:47.611 --rc geninfo_all_blocks=1 00:13:47.611 --rc geninfo_unexecuted_blocks=1 00:13:47.611 00:13:47.611 ' 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:47.611 07:15:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:47.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:13:47.611 07:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:13:48.549 
07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:48.549 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:48.549 07:15:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:48.549 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:48.549 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:48.550 Found net devices under 0000:09:00.0: cvl_0_0 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:48.550 Found net devices under 0000:09:00.1: cvl_0_1 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
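The gather_supported_nvmf_pci_devs trace above walks a fixed list of Intel E810/X722 and Mellanox device IDs, keeps the two E810 functions found on this node (0000:09:00.0 and 0000:09:00.1, device 0x159b, bound to the ice driver), and maps each PCI function to its kernel netdev by globbing sysfs, which is where cvl_0_0 and cvl_0_1 come from. The sysfs lookup can be reproduced by hand; a small sketch using the PCI address from this run:

    pci=0000:09:00.0
    # each network-capable PCI function lists its netdev name(s) under .../net/
    for netdev in /sys/bus/pci/devices/$pci/net/*; do
        echo "PCI $pci -> ${netdev##*/}"       # e.g. cvl_0_0
    done
    basename "$(readlink /sys/bus/pci/devices/$pci/driver)"   # e.g. ice
    cat /sys/bus/pci/devices/$pci/device                      # e.g. 0x159b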
net_devs+=("${pci_net_devs[@]}") 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:48.550 07:15:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:48.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:48.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:13:48.550 00:13:48.550 --- 10.0.0.2 ping statistics --- 00:13:48.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.550 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:48.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:48.550 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:13:48.550 00:13:48.550 --- 10.0.0.1 ping statistics --- 00:13:48.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.550 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2487580 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2487580 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2487580 ']' 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
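The nvmf_tcp_init sequence above turns the two E810 ports into an isolated target/initiator pair: cvl_0_0 is moved into a new network namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator side with 10.0.0.1/24, an iptables rule opens TCP port 4420 on the initiator interface, a ping in each direction verifies the path, and nvme-tcp is modprobed for the kernel-initiator steps later on. Condensed from the commands in the trace (interface names and addresses are the ones this run used):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
    modprobe nvme-tcp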
00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:48.550 07:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.117 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:49.117 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:13:49.117 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:49.117 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:49.117 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.117 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:49.117 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2487606 00:13:49.117 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:13:49.117 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:49.117 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ce647f01f643f78e026e4935cd5a227cb50b897c5454b829 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.zrV 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ce647f01f643f78e026e4935cd5a227cb50b897c5454b829 0 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ce647f01f643f78e026e4935cd5a227cb50b897c5454b829 0 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ce647f01f643f78e026e4935cd5a227cb50b897c5454b829 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.zrV 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.zrV 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.zrV 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d3b7c8737b0cc3a3c7dfff45e0cf2d2a6035d381f9fe2930d716d00c6e672510 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.piq 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d3b7c8737b0cc3a3c7dfff45e0cf2d2a6035d381f9fe2930d716d00c6e672510 3 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d3b7c8737b0cc3a3c7dfff45e0cf2d2a6035d381f9fe2930d716d00c6e672510 3 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d3b7c8737b0cc3a3c7dfff45e0cf2d2a6035d381f9fe2930d716d00c6e672510 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.piq 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.piq 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.piq 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=148294116cdc0e56ab2f1936a0e64880 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.YoI 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 148294116cdc0e56ab2f1936a0e64880 1 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 148294116cdc0e56ab2f1936a0e64880 1 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=148294116cdc0e56ab2f1936a0e64880 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.YoI 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.YoI 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.YoI 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4c5031bdcc2776a4e39ce9d5a98eeda247d26d5cee0913cc 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.jQY 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4c5031bdcc2776a4e39ce9d5a98eeda247d26d5cee0913cc 2 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4c5031bdcc2776a4e39ce9d5a98eeda247d26d5cee0913cc 2 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:49.118 07:15:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4c5031bdcc2776a4e39ce9d5a98eeda247d26d5cee0913cc 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.jQY 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.jQY 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.jQY 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3a0d3829031bb51c918c86500a84baf9ad36f13c8b2a7d55 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.5Bw 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3a0d3829031bb51c918c86500a84baf9ad36f13c8b2a7d55 2 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3a0d3829031bb51c918c86500a84baf9ad36f13c8b2a7d55 2 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3a0d3829031bb51c918c86500a84baf9ad36f13c8b2a7d55 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.5Bw 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.5Bw 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.5Bw 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:13:49.118 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:49.119 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:49.119 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:13:49.119 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:13:49.119 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:49.119 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ad9d9d75ef59be7a21b15e51840111cd 00:13:49.119 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:13:49.377 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.gx3 00:13:49.377 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ad9d9d75ef59be7a21b15e51840111cd 1 00:13:49.377 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ad9d9d75ef59be7a21b15e51840111cd 1 00:13:49.377 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:49.377 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:49.377 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ad9d9d75ef59be7a21b15e51840111cd 00:13:49.377 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:13:49.377 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:49.377 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.gx3 00:13:49.377 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.gx3 00:13:49.377 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.gx3 00:13:49.377 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:13:49.377 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:49.377 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:49.377 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:49.377 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:13:49.377 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:13:49.377 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:49.377 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=43d1a8b37607d366b9ee6a31a7ffb3b86413132602e9e7a84f7494f4b55c983f 00:13:49.377 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:13:49.377 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.pxV 00:13:49.377 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 43d1a8b37607d366b9ee6a31a7ffb3b86413132602e9e7a84f7494f4b55c983f 3 00:13:49.377 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 43d1a8b37607d366b9ee6a31a7ffb3b86413132602e9e7a84f7494f4b55c983f 3 00:13:49.377 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:49.377 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:49.377 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=43d1a8b37607d366b9ee6a31a7ffb3b86413132602e9e7a84f7494f4b55c983f 00:13:49.378 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:13:49.378 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:49.378 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.pxV 00:13:49.378 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.pxV 00:13:49.378 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.pxV 00:13:49.378 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:13:49.378 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2487580 00:13:49.378 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2487580 ']' 00:13:49.378 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.378 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:49.378 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.378 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:49.378 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.635 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:49.636 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:13:49.636 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2487606 /var/tmp/host.sock 00:13:49.636 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2487606 ']' 00:13:49.636 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:13:49.636 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:49.636 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:49.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
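Each key file registered with the keyrings below (/tmp/spdk.key-null.zrV, /tmp/spdk.key-sha512.piq, /tmp/spdk.key-sha256.YoI, /tmp/spdk.key-sha384.jQY, /tmp/spdk.key-sha384.5Bw, /tmp/spdk.key-sha256.gx3, /tmp/spdk.key-sha512.pxV) came out of the gen_dhchap_key calls traced above: pull len/2 random bytes from /dev/urandom as a hex string, wrap it in the DHHC-1 secret representation with the digest index used by this run (null=0, sha256=1, sha384=2, sha512=3), write it to a mktemp file and chmod it 0600. The actual DHHC-1 formatting is done by an inline python step the trace does not expand, so the sketch below mirrors only the visible parts and leaves that step as a clearly marked placeholder:

    # Illustrative sketch of the gen_dhchap_key flow seen above; not the literal helper.
    gen_dhchap_key_sketch() {
        local digest=$1 len=$2 key file
        declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)

        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # e.g. 48 hex chars for len=48
        file=$(mktemp -t "spdk.key-$digest.XXX")

        # Placeholder: the real flow feeds $key and ${digests[$digest]} to an inline
        # python snippet that produces the final DHHC-1 secret string written to the file.
        printf 'DHHC-1:%02d:%s:\n' "${digests[$digest]}" "$key" > "$file"

        chmod 0600 "$file"
        echo "$file"
    }

    keyfile=$(gen_dhchap_key_sketch null 48)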
00:13:49.636 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:49.636 07:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.893 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:49.894 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:13:49.894 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:13:49.894 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.894 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.894 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.894 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:49.894 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.zrV 00:13:49.894 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.894 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.894 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.894 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.zrV 00:13:49.894 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.zrV 00:13:50.152 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.piq ]] 00:13:50.152 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.piq 00:13:50.152 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.152 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.152 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.152 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.piq 00:13:50.152 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.piq 00:13:50.442 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:50.442 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.YoI 00:13:50.442 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.442 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.442 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.442 07:15:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.YoI 00:13:50.442 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.YoI 00:13:50.728 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.jQY ]] 00:13:50.728 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.jQY 00:13:50.728 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.728 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.728 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.728 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.jQY 00:13:50.728 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.jQY 00:13:50.986 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:50.986 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.5Bw 00:13:50.986 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.986 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.986 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.986 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.5Bw 00:13:50.986 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.5Bw 00:13:51.244 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.gx3 ]] 00:13:51.244 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.gx3 00:13:51.244 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.244 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.244 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.244 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.gx3 00:13:51.244 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.gx3 00:13:51.502 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:51.502 07:15:54 
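Two SPDK processes are in play from here on: the nvmf_tgt launched inside the namespace (its RPC socket is the default /var/tmp/spdk.sock, reached through rpc_cmd) and the spdk_tgt started with -r /var/tmp/host.sock, which plays the NVMe host role (reached through the hostrpc wrapper). Every generated key file is registered on both sides with keyring_file_add_key under the names key0..key3 and ckey0..ckey2, which is what the paired rpc_cmd/hostrpc lines here do. Reduced to plain rpc.py calls for the first key pair:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # target side (default /var/tmp/spdk.sock)
    $RPC keyring_file_add_key key0  /tmp/spdk.key-null.zrV
    $RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.piq

    # host side (the spdk_tgt listening on /var/tmp/host.sock)
    $RPC -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.zrV
    $RPC -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.piq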
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.pxV 00:13:51.502 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.502 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.502 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.502 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.pxV 00:13:51.502 07:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.pxV 00:13:51.760 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:13:51.760 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:51.760 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:51.760 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:51.760 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:51.760 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:52.018 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:13:52.018 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:52.018 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:52.018 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:52.018 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:52.018 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:52.018 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:52.018 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.018 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.018 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.018 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:52.018 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:52.018 
07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:52.583 00:13:52.583 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:52.583 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:52.583 07:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:52.841 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:52.841 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:52.841 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.841 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.841 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.841 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:52.841 { 00:13:52.841 "cntlid": 1, 00:13:52.841 "qid": 0, 00:13:52.841 "state": "enabled", 00:13:52.841 "thread": "nvmf_tgt_poll_group_000", 00:13:52.841 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:13:52.841 "listen_address": { 00:13:52.841 "trtype": "TCP", 00:13:52.841 "adrfam": "IPv4", 00:13:52.841 "traddr": "10.0.0.2", 00:13:52.841 "trsvcid": "4420" 00:13:52.841 }, 00:13:52.841 "peer_address": { 00:13:52.841 "trtype": "TCP", 00:13:52.841 "adrfam": "IPv4", 00:13:52.841 "traddr": "10.0.0.1", 00:13:52.841 "trsvcid": "45354" 00:13:52.841 }, 00:13:52.841 "auth": { 00:13:52.841 "state": "completed", 00:13:52.841 "digest": "sha256", 00:13:52.841 "dhgroup": "null" 00:13:52.841 } 00:13:52.841 } 00:13:52.841 ]' 00:13:52.841 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:52.841 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:52.841 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:52.841 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:52.841 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:52.841 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:52.841 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:52.841 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:53.099 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
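The qpair dump above is the pass/fail check for one cell of the digest/dhgroup/key matrix: the host bdev layer is restricted to a single digest and DH group (bdev_nvme_set_options), the target subsystem is told which key and controller key to expect from this host NQN (nvmf_subsystem_add_host), the host attaches with the same named keys (bdev_nvme_attach_controller), and nvmf_subsystem_get_qpairs must then report auth state "completed" with the matching digest and dhgroup before the controller is detached for the next combination. The same five RPCs, pulled out of the trace for the sha256/null/key0 iteration:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a

    $RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

    $RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    $RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # completed

    $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0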
DHHC-1:00:Y2U2NDdmMDFmNjQzZjc4ZTAyNmU0OTM1Y2Q1YTIyN2NiNTBiODk3YzU0NTRiODI5urJx4w==: --dhchap-ctrl-secret DHHC-1:03:ZDNiN2M4NzM3YjBjYzNhM2M3ZGZmZjQ1ZTBjZjJkMmE2MDM1ZDM4MWY5ZmUyOTMwZDcxNmQwMGM2ZTY3MjUxMKdz6b0=: 00:13:53.099 07:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:Y2U2NDdmMDFmNjQzZjc4ZTAyNmU0OTM1Y2Q1YTIyN2NiNTBiODk3YzU0NTRiODI5urJx4w==: --dhchap-ctrl-secret DHHC-1:03:ZDNiN2M4NzM3YjBjYzNhM2M3ZGZmZjQ1ZTBjZjJkMmE2MDM1ZDM4MWY5ZmUyOTMwZDcxNmQwMGM2ZTY3MjUxMKdz6b0=: 00:13:54.031 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:54.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:54.031 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:54.031 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.031 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.031 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.031 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:54.031 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:54.031 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:54.289 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:13:54.289 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:54.289 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:54.289 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:54.289 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:54.289 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:54.289 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:54.289 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.289 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.289 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.289 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:54.289 07:15:57 
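After the in-SPDK attach path passes, the same credentials are exercised through the kernel initiator: nvme connect is given the generated secrets directly as --dhchap-secret (host key) and --dhchap-ctrl-secret (controller key), with the DHHC-1:NN: prefix carrying the digest index assigned at key-generation time (00 for the null-digest key, 03 for the sha512 controller key in the connect above); the namespace is then disconnected and the host entry removed before the loop moves to the next key. Stripped to the essential commands, with the secret values read from the key files rather than pasted inline (an assumption about how the wrapper feeds them; the trace only shows the final expanded command line):

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
        -q "$HOSTNQN" --hostid "${HOSTNQN##*:uuid:}" \
        --dhchap-secret      "$(cat /tmp/spdk.key-null.zrV)" \
        --dhchap-ctrl-secret "$(cat /tmp/spdk.key-sha512.piq)"

    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

    $RPC nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"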
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:54.289 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:54.547 00:13:54.804 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:54.804 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:54.804 07:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.062 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.062 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:55.062 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.062 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.062 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.062 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:55.062 { 00:13:55.062 "cntlid": 3, 00:13:55.062 "qid": 0, 00:13:55.062 "state": "enabled", 00:13:55.062 "thread": "nvmf_tgt_poll_group_000", 00:13:55.062 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:13:55.062 "listen_address": { 00:13:55.062 "trtype": "TCP", 00:13:55.062 "adrfam": "IPv4", 00:13:55.062 "traddr": "10.0.0.2", 00:13:55.062 "trsvcid": "4420" 00:13:55.062 }, 00:13:55.062 "peer_address": { 00:13:55.062 "trtype": "TCP", 00:13:55.062 "adrfam": "IPv4", 00:13:55.062 "traddr": "10.0.0.1", 00:13:55.062 "trsvcid": "40778" 00:13:55.062 }, 00:13:55.062 "auth": { 00:13:55.062 "state": "completed", 00:13:55.062 "digest": "sha256", 00:13:55.062 "dhgroup": "null" 00:13:55.062 } 00:13:55.062 } 00:13:55.062 ]' 00:13:55.062 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:55.062 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:55.062 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:55.062 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:55.063 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:55.063 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.063 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.063 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.321 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTQ4Mjk0MTE2Y2RjMGU1NmFiMmYxOTM2YTBlNjQ4ODAvzKPS: --dhchap-ctrl-secret DHHC-1:02:NGM1MDMxYmRjYzI3NzZhNGUzOWNlOWQ1YTk4ZWVkYTI0N2QyNmQ1Y2VlMDkxM2NjJw2DYg==: 00:13:55.321 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MTQ4Mjk0MTE2Y2RjMGU1NmFiMmYxOTM2YTBlNjQ4ODAvzKPS: --dhchap-ctrl-secret DHHC-1:02:NGM1MDMxYmRjYzI3NzZhNGUzOWNlOWQ1YTk4ZWVkYTI0N2QyNmQ1Y2VlMDkxM2NjJw2DYg==: 00:13:56.259 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:56.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:56.259 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:56.259 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.260 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.260 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.260 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:56.260 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:56.260 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:56.517 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:13:56.517 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:56.517 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:56.517 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:56.517 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:56.517 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:56.517 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:56.517 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.517 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.517 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.517 07:15:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:56.517 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:56.517 07:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:57.083 00:13:57.083 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:57.083 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:57.083 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:57.083 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.341 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:57.341 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.341 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.341 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.341 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:57.341 { 00:13:57.341 "cntlid": 5, 00:13:57.341 "qid": 0, 00:13:57.341 "state": "enabled", 00:13:57.341 "thread": "nvmf_tgt_poll_group_000", 00:13:57.341 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:13:57.341 "listen_address": { 00:13:57.341 "trtype": "TCP", 00:13:57.341 "adrfam": "IPv4", 00:13:57.341 "traddr": "10.0.0.2", 00:13:57.341 "trsvcid": "4420" 00:13:57.341 }, 00:13:57.341 "peer_address": { 00:13:57.341 "trtype": "TCP", 00:13:57.341 "adrfam": "IPv4", 00:13:57.341 "traddr": "10.0.0.1", 00:13:57.341 "trsvcid": "40810" 00:13:57.341 }, 00:13:57.341 "auth": { 00:13:57.341 "state": "completed", 00:13:57.341 "digest": "sha256", 00:13:57.341 "dhgroup": "null" 00:13:57.341 } 00:13:57.341 } 00:13:57.341 ]' 00:13:57.341 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:57.341 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:57.341 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:57.341 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:57.341 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:57.341 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:57.341 07:16:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:57.341 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.600 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: --dhchap-ctrl-secret DHHC-1:01:YWQ5ZDlkNzVlZjU5YmU3YTIxYjE1ZTUxODQwMTExY2SuIum2: 00:13:57.600 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: --dhchap-ctrl-secret DHHC-1:01:YWQ5ZDlkNzVlZjU5YmU3YTIxYjE1ZTUxODQwMTExY2SuIum2: 00:13:58.532 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:58.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:58.532 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:58.532 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.532 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.532 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.532 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:58.532 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:58.532 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:58.790 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:13:58.790 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:58.790 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:58.790 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:58.790 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:58.790 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:58.790 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:13:58.790 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.790 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:13:58.790 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.790 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:58.790 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:58.790 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:59.048 00:13:59.305 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:59.305 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:59.305 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.562 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.562 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:59.562 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.562 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.562 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.562 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:59.562 { 00:13:59.562 "cntlid": 7, 00:13:59.562 "qid": 0, 00:13:59.562 "state": "enabled", 00:13:59.562 "thread": "nvmf_tgt_poll_group_000", 00:13:59.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:13:59.562 "listen_address": { 00:13:59.562 "trtype": "TCP", 00:13:59.562 "adrfam": "IPv4", 00:13:59.562 "traddr": "10.0.0.2", 00:13:59.562 "trsvcid": "4420" 00:13:59.562 }, 00:13:59.562 "peer_address": { 00:13:59.562 "trtype": "TCP", 00:13:59.562 "adrfam": "IPv4", 00:13:59.562 "traddr": "10.0.0.1", 00:13:59.562 "trsvcid": "40836" 00:13:59.562 }, 00:13:59.562 "auth": { 00:13:59.562 "state": "completed", 00:13:59.562 "digest": "sha256", 00:13:59.562 "dhgroup": "null" 00:13:59.562 } 00:13:59.562 } 00:13:59.562 ]' 00:13:59.562 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:59.562 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:59.562 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:59.562 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:59.562 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:59.562 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:59.562 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.562 07:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:59.820 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDNkMWE4YjM3NjA3ZDM2NmI5ZWU2YTMxYTdmZmIzYjg2NDEzMTMyNjAyZTllN2E4NGY3NDk0ZjRiNTVjOTgzZmUL5vw=: 00:13:59.820 07:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NDNkMWE4YjM3NjA3ZDM2NmI5ZWU2YTMxYTdmZmIzYjg2NDEzMTMyNjAyZTllN2E4NGY3NDk0ZjRiNTVjOTgzZmUL5vw=: 00:14:00.753 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:00.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:00.753 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:00.753 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.753 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.753 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.753 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:00.753 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:00.753 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:00.754 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:01.011 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:14:01.011 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:01.011 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:01.011 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:01.011 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:01.011 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.011 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.011 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.011 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.011 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.011 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.011 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.011 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.576 00:14:01.576 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:01.577 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:01.577 07:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.577 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.577 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:01.577 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.577 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.834 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.834 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:01.834 { 00:14:01.834 "cntlid": 9, 00:14:01.834 "qid": 0, 00:14:01.834 "state": "enabled", 00:14:01.834 "thread": "nvmf_tgt_poll_group_000", 00:14:01.834 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:01.834 "listen_address": { 00:14:01.834 "trtype": "TCP", 00:14:01.834 "adrfam": "IPv4", 00:14:01.834 "traddr": "10.0.0.2", 00:14:01.834 "trsvcid": "4420" 00:14:01.834 }, 00:14:01.834 "peer_address": { 00:14:01.834 "trtype": "TCP", 00:14:01.834 "adrfam": "IPv4", 00:14:01.834 "traddr": "10.0.0.1", 00:14:01.834 "trsvcid": "40882" 00:14:01.834 }, 00:14:01.834 "auth": { 00:14:01.834 "state": "completed", 00:14:01.834 "digest": "sha256", 00:14:01.834 "dhgroup": "ffdhe2048" 00:14:01.834 } 00:14:01.834 } 00:14:01.834 ]' 00:14:01.834 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:01.834 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:01.834 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:01.834 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:14:01.834 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:01.834 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:01.834 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:01.834 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.092 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2U2NDdmMDFmNjQzZjc4ZTAyNmU0OTM1Y2Q1YTIyN2NiNTBiODk3YzU0NTRiODI5urJx4w==: --dhchap-ctrl-secret DHHC-1:03:ZDNiN2M4NzM3YjBjYzNhM2M3ZGZmZjQ1ZTBjZjJkMmE2MDM1ZDM4MWY5ZmUyOTMwZDcxNmQwMGM2ZTY3MjUxMKdz6b0=: 00:14:02.092 07:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:Y2U2NDdmMDFmNjQzZjc4ZTAyNmU0OTM1Y2Q1YTIyN2NiNTBiODk3YzU0NTRiODI5urJx4w==: --dhchap-ctrl-secret DHHC-1:03:ZDNiN2M4NzM3YjBjYzNhM2M3ZGZmZjQ1ZTBjZjJkMmE2MDM1ZDM4MWY5ZmUyOTMwZDcxNmQwMGM2ZTY3MjUxMKdz6b0=: 00:14:03.025 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:03.025 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:03.025 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:03.025 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.026 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.026 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.026 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:03.026 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:03.026 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:03.283 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:14:03.283 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:03.284 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:03.284 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:03.284 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:03.284 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:03.284 07:16:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:03.284 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.284 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.284 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.284 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:03.284 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:03.284 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:03.849 00:14:03.849 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:03.849 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:03.849 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.108 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.108 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:04.108 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.108 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.108 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.108 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:04.108 { 00:14:04.108 "cntlid": 11, 00:14:04.108 "qid": 0, 00:14:04.108 "state": "enabled", 00:14:04.108 "thread": "nvmf_tgt_poll_group_000", 00:14:04.108 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:04.108 "listen_address": { 00:14:04.108 "trtype": "TCP", 00:14:04.108 "adrfam": "IPv4", 00:14:04.108 "traddr": "10.0.0.2", 00:14:04.108 "trsvcid": "4420" 00:14:04.108 }, 00:14:04.108 "peer_address": { 00:14:04.108 "trtype": "TCP", 00:14:04.108 "adrfam": "IPv4", 00:14:04.108 "traddr": "10.0.0.1", 00:14:04.108 "trsvcid": "60948" 00:14:04.108 }, 00:14:04.108 "auth": { 00:14:04.108 "state": "completed", 00:14:04.108 "digest": "sha256", 00:14:04.108 "dhgroup": "ffdhe2048" 00:14:04.108 } 00:14:04.108 } 00:14:04.108 ]' 00:14:04.108 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:04.108 07:16:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:04.108 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:04.108 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:04.108 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:04.108 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:04.108 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:04.108 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:04.366 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTQ4Mjk0MTE2Y2RjMGU1NmFiMmYxOTM2YTBlNjQ4ODAvzKPS: --dhchap-ctrl-secret DHHC-1:02:NGM1MDMxYmRjYzI3NzZhNGUzOWNlOWQ1YTk4ZWVkYTI0N2QyNmQ1Y2VlMDkxM2NjJw2DYg==: 00:14:04.366 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MTQ4Mjk0MTE2Y2RjMGU1NmFiMmYxOTM2YTBlNjQ4ODAvzKPS: --dhchap-ctrl-secret DHHC-1:02:NGM1MDMxYmRjYzI3NzZhNGUzOWNlOWQ1YTk4ZWVkYTI0N2QyNmQ1Y2VlMDkxM2NjJw2DYg==: 00:14:05.300 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:05.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:05.301 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:05.301 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.301 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.301 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.301 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:05.301 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:05.301 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:05.559 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:14:05.559 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:05.559 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:05.559 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:05.559 07:16:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:05.559 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:05.559 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:05.559 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.559 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.559 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.559 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:05.559 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:05.559 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.124 00:14:06.124 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:06.124 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:06.124 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:06.382 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:06.382 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:06.382 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.382 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.382 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.382 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:06.382 { 00:14:06.382 "cntlid": 13, 00:14:06.382 "qid": 0, 00:14:06.382 "state": "enabled", 00:14:06.382 "thread": "nvmf_tgt_poll_group_000", 00:14:06.382 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:06.382 "listen_address": { 00:14:06.382 "trtype": "TCP", 00:14:06.382 "adrfam": "IPv4", 00:14:06.382 "traddr": "10.0.0.2", 00:14:06.382 "trsvcid": "4420" 00:14:06.382 }, 00:14:06.382 "peer_address": { 00:14:06.382 "trtype": "TCP", 00:14:06.382 "adrfam": "IPv4", 00:14:06.382 "traddr": "10.0.0.1", 00:14:06.382 "trsvcid": "60976" 00:14:06.382 }, 00:14:06.382 "auth": { 00:14:06.382 "state": "completed", 00:14:06.382 "digest": 
"sha256", 00:14:06.382 "dhgroup": "ffdhe2048" 00:14:06.382 } 00:14:06.382 } 00:14:06.382 ]' 00:14:06.382 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:06.382 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:06.382 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:06.382 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:06.382 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:06.382 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:06.382 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:06.382 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:06.640 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: --dhchap-ctrl-secret DHHC-1:01:YWQ5ZDlkNzVlZjU5YmU3YTIxYjE1ZTUxODQwMTExY2SuIum2: 00:14:06.640 07:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: --dhchap-ctrl-secret DHHC-1:01:YWQ5ZDlkNzVlZjU5YmU3YTIxYjE1ZTUxODQwMTExY2SuIum2: 00:14:07.573 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:07.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:07.573 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:07.573 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.573 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.573 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.573 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:07.573 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:07.573 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:07.831 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:14:07.831 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:07.831 07:16:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:07.831 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:07.831 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:07.831 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:07.831 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:07.831 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.831 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.831 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.831 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:07.831 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:07.831 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:08.397 00:14:08.397 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:08.397 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:08.397 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:08.397 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:08.397 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:08.397 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.397 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.655 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.655 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:08.655 { 00:14:08.655 "cntlid": 15, 00:14:08.655 "qid": 0, 00:14:08.655 "state": "enabled", 00:14:08.655 "thread": "nvmf_tgt_poll_group_000", 00:14:08.655 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:08.655 "listen_address": { 00:14:08.655 "trtype": "TCP", 00:14:08.655 "adrfam": "IPv4", 00:14:08.655 "traddr": "10.0.0.2", 00:14:08.655 "trsvcid": "4420" 00:14:08.655 }, 00:14:08.655 "peer_address": { 00:14:08.655 "trtype": "TCP", 00:14:08.655 "adrfam": "IPv4", 00:14:08.655 "traddr": "10.0.0.1", 00:14:08.655 
"trsvcid": "32774" 00:14:08.655 }, 00:14:08.655 "auth": { 00:14:08.655 "state": "completed", 00:14:08.655 "digest": "sha256", 00:14:08.655 "dhgroup": "ffdhe2048" 00:14:08.655 } 00:14:08.655 } 00:14:08.655 ]' 00:14:08.655 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:08.655 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:08.655 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:08.655 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:08.655 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:08.655 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:08.655 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:08.655 07:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:08.913 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDNkMWE4YjM3NjA3ZDM2NmI5ZWU2YTMxYTdmZmIzYjg2NDEzMTMyNjAyZTllN2E4NGY3NDk0ZjRiNTVjOTgzZmUL5vw=: 00:14:08.913 07:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NDNkMWE4YjM3NjA3ZDM2NmI5ZWU2YTMxYTdmZmIzYjg2NDEzMTMyNjAyZTllN2E4NGY3NDk0ZjRiNTVjOTgzZmUL5vw=: 00:14:09.847 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:09.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:09.847 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:09.847 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.847 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.847 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.847 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:09.847 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:09.847 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:09.847 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:10.105 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:14:10.105 07:16:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:10.105 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:10.105 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:10.105 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:10.105 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:10.105 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.105 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.105 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.105 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.105 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.105 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.105 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.670 00:14:10.670 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:10.670 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:10.670 07:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.670 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:10.670 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:10.670 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.670 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.928 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.928 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:10.928 { 00:14:10.928 "cntlid": 17, 00:14:10.928 "qid": 0, 00:14:10.928 "state": "enabled", 00:14:10.928 "thread": "nvmf_tgt_poll_group_000", 00:14:10.928 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:10.928 "listen_address": { 00:14:10.928 "trtype": "TCP", 00:14:10.928 "adrfam": "IPv4", 
00:14:10.928 "traddr": "10.0.0.2", 00:14:10.928 "trsvcid": "4420" 00:14:10.928 }, 00:14:10.928 "peer_address": { 00:14:10.928 "trtype": "TCP", 00:14:10.928 "adrfam": "IPv4", 00:14:10.928 "traddr": "10.0.0.1", 00:14:10.928 "trsvcid": "32804" 00:14:10.928 }, 00:14:10.928 "auth": { 00:14:10.928 "state": "completed", 00:14:10.928 "digest": "sha256", 00:14:10.928 "dhgroup": "ffdhe3072" 00:14:10.928 } 00:14:10.928 } 00:14:10.928 ]' 00:14:10.928 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:10.928 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:10.928 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:10.928 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:10.928 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:10.928 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:10.928 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:10.928 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:11.187 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2U2NDdmMDFmNjQzZjc4ZTAyNmU0OTM1Y2Q1YTIyN2NiNTBiODk3YzU0NTRiODI5urJx4w==: --dhchap-ctrl-secret DHHC-1:03:ZDNiN2M4NzM3YjBjYzNhM2M3ZGZmZjQ1ZTBjZjJkMmE2MDM1ZDM4MWY5ZmUyOTMwZDcxNmQwMGM2ZTY3MjUxMKdz6b0=: 00:14:11.187 07:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:Y2U2NDdmMDFmNjQzZjc4ZTAyNmU0OTM1Y2Q1YTIyN2NiNTBiODk3YzU0NTRiODI5urJx4w==: --dhchap-ctrl-secret DHHC-1:03:ZDNiN2M4NzM3YjBjYzNhM2M3ZGZmZjQ1ZTBjZjJkMmE2MDM1ZDM4MWY5ZmUyOTMwZDcxNmQwMGM2ZTY3MjUxMKdz6b0=: 00:14:12.120 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:12.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:12.120 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:12.120 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.120 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.120 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.121 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:12.121 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:12.121 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:12.378 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:14:12.378 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:12.378 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:12.378 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:12.378 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:12.378 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.378 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.378 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.378 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.378 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.378 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.379 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.379 07:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.636 00:14:12.894 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:12.894 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:12.894 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.153 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.153 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:13.153 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.153 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.153 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.153 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:13.153 { 
00:14:13.153 "cntlid": 19, 00:14:13.153 "qid": 0, 00:14:13.153 "state": "enabled", 00:14:13.153 "thread": "nvmf_tgt_poll_group_000", 00:14:13.153 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:13.153 "listen_address": { 00:14:13.153 "trtype": "TCP", 00:14:13.153 "adrfam": "IPv4", 00:14:13.153 "traddr": "10.0.0.2", 00:14:13.153 "trsvcid": "4420" 00:14:13.153 }, 00:14:13.153 "peer_address": { 00:14:13.153 "trtype": "TCP", 00:14:13.153 "adrfam": "IPv4", 00:14:13.153 "traddr": "10.0.0.1", 00:14:13.153 "trsvcid": "32828" 00:14:13.153 }, 00:14:13.153 "auth": { 00:14:13.153 "state": "completed", 00:14:13.153 "digest": "sha256", 00:14:13.153 "dhgroup": "ffdhe3072" 00:14:13.153 } 00:14:13.153 } 00:14:13.153 ]' 00:14:13.153 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:13.153 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:13.153 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:13.153 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:13.153 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:13.153 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:13.153 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:13.153 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.411 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTQ4Mjk0MTE2Y2RjMGU1NmFiMmYxOTM2YTBlNjQ4ODAvzKPS: --dhchap-ctrl-secret DHHC-1:02:NGM1MDMxYmRjYzI3NzZhNGUzOWNlOWQ1YTk4ZWVkYTI0N2QyNmQ1Y2VlMDkxM2NjJw2DYg==: 00:14:13.411 07:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MTQ4Mjk0MTE2Y2RjMGU1NmFiMmYxOTM2YTBlNjQ4ODAvzKPS: --dhchap-ctrl-secret DHHC-1:02:NGM1MDMxYmRjYzI3NzZhNGUzOWNlOWQ1YTk4ZWVkYTI0N2QyNmQ1Y2VlMDkxM2NjJw2DYg==: 00:14:14.344 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:14.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:14.344 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:14.344 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.344 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.344 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.344 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:14.344 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:14.344 07:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:14.911 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:14:14.911 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:14.911 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:14.911 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:14.911 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:14.911 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:14.911 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.911 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.911 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.911 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.911 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.911 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.911 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:15.170 00:14:15.170 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:15.170 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:15.170 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:15.429 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:15.429 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:15.429 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.429 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.429 07:16:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.429 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:15.429 { 00:14:15.429 "cntlid": 21, 00:14:15.429 "qid": 0, 00:14:15.429 "state": "enabled", 00:14:15.429 "thread": "nvmf_tgt_poll_group_000", 00:14:15.429 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:15.429 "listen_address": { 00:14:15.429 "trtype": "TCP", 00:14:15.429 "adrfam": "IPv4", 00:14:15.429 "traddr": "10.0.0.2", 00:14:15.429 "trsvcid": "4420" 00:14:15.429 }, 00:14:15.429 "peer_address": { 00:14:15.429 "trtype": "TCP", 00:14:15.429 "adrfam": "IPv4", 00:14:15.429 "traddr": "10.0.0.1", 00:14:15.429 "trsvcid": "38994" 00:14:15.429 }, 00:14:15.429 "auth": { 00:14:15.429 "state": "completed", 00:14:15.429 "digest": "sha256", 00:14:15.429 "dhgroup": "ffdhe3072" 00:14:15.429 } 00:14:15.429 } 00:14:15.429 ]' 00:14:15.429 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:15.429 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:15.429 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:15.429 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:15.429 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:15.429 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:15.429 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:15.429 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:15.993 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: --dhchap-ctrl-secret DHHC-1:01:YWQ5ZDlkNzVlZjU5YmU3YTIxYjE1ZTUxODQwMTExY2SuIum2: 00:14:15.993 07:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: --dhchap-ctrl-secret DHHC-1:01:YWQ5ZDlkNzVlZjU5YmU3YTIxYjE1ZTUxODQwMTExY2SuIum2: 00:14:16.927 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:16.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:16.927 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:16.927 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.927 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.927 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:16.927 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:16.927 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:16.927 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:17.185 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:14:17.185 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:17.185 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:17.185 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:17.185 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:17.185 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:17.185 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:17.185 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.185 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.185 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.185 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:17.185 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:17.185 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:17.442 00:14:17.442 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:17.442 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:17.443 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:17.701 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.701 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:17.701 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.701 07:16:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.701 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.701 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:17.701 { 00:14:17.701 "cntlid": 23, 00:14:17.701 "qid": 0, 00:14:17.701 "state": "enabled", 00:14:17.701 "thread": "nvmf_tgt_poll_group_000", 00:14:17.701 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:17.701 "listen_address": { 00:14:17.701 "trtype": "TCP", 00:14:17.701 "adrfam": "IPv4", 00:14:17.701 "traddr": "10.0.0.2", 00:14:17.701 "trsvcid": "4420" 00:14:17.701 }, 00:14:17.701 "peer_address": { 00:14:17.701 "trtype": "TCP", 00:14:17.701 "adrfam": "IPv4", 00:14:17.701 "traddr": "10.0.0.1", 00:14:17.701 "trsvcid": "39022" 00:14:17.701 }, 00:14:17.701 "auth": { 00:14:17.701 "state": "completed", 00:14:17.701 "digest": "sha256", 00:14:17.701 "dhgroup": "ffdhe3072" 00:14:17.701 } 00:14:17.701 } 00:14:17.701 ]' 00:14:17.701 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:17.701 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:17.701 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:17.701 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:17.701 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:17.958 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:17.958 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:17.958 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.216 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDNkMWE4YjM3NjA3ZDM2NmI5ZWU2YTMxYTdmZmIzYjg2NDEzMTMyNjAyZTllN2E4NGY3NDk0ZjRiNTVjOTgzZmUL5vw=: 00:14:18.216 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NDNkMWE4YjM3NjA3ZDM2NmI5ZWU2YTMxYTdmZmIzYjg2NDEzMTMyNjAyZTllN2E4NGY3NDk0ZjRiNTVjOTgzZmUL5vw=: 00:14:19.150 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:19.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:19.150 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:19.150 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.150 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.150 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:14:19.150 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:19.150 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:19.150 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:19.150 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:19.407 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:14:19.407 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:19.407 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:19.407 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:19.407 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:19.407 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:19.407 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:19.407 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.407 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.407 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.407 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:19.407 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:19.407 07:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:19.664 00:14:19.664 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:19.664 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:19.664 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:20.364 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:20.364 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:20.364 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.364 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.364 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.364 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:20.364 { 00:14:20.364 "cntlid": 25, 00:14:20.364 "qid": 0, 00:14:20.364 "state": "enabled", 00:14:20.364 "thread": "nvmf_tgt_poll_group_000", 00:14:20.364 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:20.364 "listen_address": { 00:14:20.364 "trtype": "TCP", 00:14:20.364 "adrfam": "IPv4", 00:14:20.364 "traddr": "10.0.0.2", 00:14:20.364 "trsvcid": "4420" 00:14:20.364 }, 00:14:20.364 "peer_address": { 00:14:20.364 "trtype": "TCP", 00:14:20.364 "adrfam": "IPv4", 00:14:20.364 "traddr": "10.0.0.1", 00:14:20.364 "trsvcid": "39056" 00:14:20.364 }, 00:14:20.364 "auth": { 00:14:20.364 "state": "completed", 00:14:20.364 "digest": "sha256", 00:14:20.364 "dhgroup": "ffdhe4096" 00:14:20.364 } 00:14:20.364 } 00:14:20.364 ]' 00:14:20.364 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:20.364 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:20.364 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:20.364 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:20.364 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:20.364 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:20.364 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:20.364 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:20.364 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2U2NDdmMDFmNjQzZjc4ZTAyNmU0OTM1Y2Q1YTIyN2NiNTBiODk3YzU0NTRiODI5urJx4w==: --dhchap-ctrl-secret DHHC-1:03:ZDNiN2M4NzM3YjBjYzNhM2M3ZGZmZjQ1ZTBjZjJkMmE2MDM1ZDM4MWY5ZmUyOTMwZDcxNmQwMGM2ZTY3MjUxMKdz6b0=: 00:14:20.364 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:Y2U2NDdmMDFmNjQzZjc4ZTAyNmU0OTM1Y2Q1YTIyN2NiNTBiODk3YzU0NTRiODI5urJx4w==: --dhchap-ctrl-secret DHHC-1:03:ZDNiN2M4NzM3YjBjYzNhM2M3ZGZmZjQ1ZTBjZjJkMmE2MDM1ZDM4MWY5ZmUyOTMwZDcxNmQwMGM2ZTY3MjUxMKdz6b0=: 00:14:21.298 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:21.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:21.298 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:21.298 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.298 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.298 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.298 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:21.298 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:21.298 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:21.555 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:14:21.555 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:21.813 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:21.813 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:21.813 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:21.813 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:21.813 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:21.813 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.813 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.813 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.813 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:21.813 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:21.813 07:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:22.071 00:14:22.071 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:22.071 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:22.071 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.328 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.328 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.328 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.328 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.328 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.328 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:22.328 { 00:14:22.328 "cntlid": 27, 00:14:22.328 "qid": 0, 00:14:22.328 "state": "enabled", 00:14:22.328 "thread": "nvmf_tgt_poll_group_000", 00:14:22.328 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:22.328 "listen_address": { 00:14:22.328 "trtype": "TCP", 00:14:22.328 "adrfam": "IPv4", 00:14:22.328 "traddr": "10.0.0.2", 00:14:22.328 "trsvcid": "4420" 00:14:22.328 }, 00:14:22.328 "peer_address": { 00:14:22.328 "trtype": "TCP", 00:14:22.328 "adrfam": "IPv4", 00:14:22.328 "traddr": "10.0.0.1", 00:14:22.328 "trsvcid": "39090" 00:14:22.328 }, 00:14:22.328 "auth": { 00:14:22.328 "state": "completed", 00:14:22.328 "digest": "sha256", 00:14:22.328 "dhgroup": "ffdhe4096" 00:14:22.328 } 00:14:22.328 } 00:14:22.328 ]' 00:14:22.328 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:22.328 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:22.329 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:22.329 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:22.329 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:22.586 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:22.586 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.586 07:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:22.843 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTQ4Mjk0MTE2Y2RjMGU1NmFiMmYxOTM2YTBlNjQ4ODAvzKPS: --dhchap-ctrl-secret DHHC-1:02:NGM1MDMxYmRjYzI3NzZhNGUzOWNlOWQ1YTk4ZWVkYTI0N2QyNmQ1Y2VlMDkxM2NjJw2DYg==: 00:14:22.843 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MTQ4Mjk0MTE2Y2RjMGU1NmFiMmYxOTM2YTBlNjQ4ODAvzKPS: --dhchap-ctrl-secret DHHC-1:02:NGM1MDMxYmRjYzI3NzZhNGUzOWNlOWQ1YTk4ZWVkYTI0N2QyNmQ1Y2VlMDkxM2NjJw2DYg==: 00:14:23.777 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:14:23.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.777 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:23.777 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.777 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.777 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.777 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:23.777 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:23.777 07:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:24.036 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:14:24.036 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:24.036 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:24.036 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:24.036 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:24.036 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:24.036 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:24.036 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.036 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.036 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.036 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:24.036 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:24.036 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:24.293 00:14:24.293 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
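The logged iterations above all repeat the same DH-HMAC-CHAP cycle: pin one digest/dhgroup pair on the SPDK host app, allow the host NQN on the subsystem with a given key index, attach a controller over TCP, check the negotiated auth parameters on the qpair, then redo the handshake with the kernel initiator and tear everything down. A condensed plain-shell sketch of one such iteration follows; it is only a paraphrase of the RPCs visible in this log, where rpc.py stands for the full scripts/rpc.py path, the /var/tmp/host.sock socket, the 10.0.0.2:4420 listener and the key2/ckey2 key names are taken from the log, the $HOSTNQN/$HOSTID/$KEY2/$CKEY2 placeholders are hypothetical stand-ins for the uuid-based host NQN and DHHC-1 secrets printed verbatim above, and the keys are assumed to have been registered with the target and host apps earlier in the test.
# host side: restrict DH-HMAC-CHAP negotiation to one digest/dhgroup pair
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
# target side: allow the host NQN with bidirectional (controller) authentication
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2
# attach a controller through the SPDK host stack, then verify what the qpair negotiated
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'   # expect completed/sha256/ffdhe4096
rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
# repeat the same authentication from the kernel initiator, then clean up for the next key index
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
    --dhchap-secret "$KEY2" --dhchap-ctrl-secret "$CKEY2"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"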
00:14:24.294 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:24.294 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.550 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.550 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.550 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.550 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.550 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.550 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:24.550 { 00:14:24.550 "cntlid": 29, 00:14:24.550 "qid": 0, 00:14:24.550 "state": "enabled", 00:14:24.550 "thread": "nvmf_tgt_poll_group_000", 00:14:24.550 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:24.550 "listen_address": { 00:14:24.550 "trtype": "TCP", 00:14:24.550 "adrfam": "IPv4", 00:14:24.550 "traddr": "10.0.0.2", 00:14:24.550 "trsvcid": "4420" 00:14:24.550 }, 00:14:24.550 "peer_address": { 00:14:24.550 "trtype": "TCP", 00:14:24.550 "adrfam": "IPv4", 00:14:24.550 "traddr": "10.0.0.1", 00:14:24.550 "trsvcid": "53508" 00:14:24.550 }, 00:14:24.550 "auth": { 00:14:24.550 "state": "completed", 00:14:24.550 "digest": "sha256", 00:14:24.550 "dhgroup": "ffdhe4096" 00:14:24.550 } 00:14:24.550 } 00:14:24.550 ]' 00:14:24.550 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:24.550 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:24.550 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:24.808 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:24.808 07:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:24.808 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.808 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.808 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:25.065 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: --dhchap-ctrl-secret DHHC-1:01:YWQ5ZDlkNzVlZjU5YmU3YTIxYjE1ZTUxODQwMTExY2SuIum2: 00:14:25.065 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: 
--dhchap-ctrl-secret DHHC-1:01:YWQ5ZDlkNzVlZjU5YmU3YTIxYjE1ZTUxODQwMTExY2SuIum2: 00:14:25.993 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.993 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.993 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:25.993 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.993 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.993 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.993 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:25.993 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:25.993 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:26.250 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:14:26.250 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:26.250 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:26.250 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:26.250 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:26.250 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.250 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:26.250 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.250 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.250 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.250 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:26.250 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:26.250 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:26.508 00:14:26.508 07:16:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:26.508 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:26.508 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.766 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.766 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.766 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.766 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.766 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.766 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:26.766 { 00:14:26.766 "cntlid": 31, 00:14:26.766 "qid": 0, 00:14:26.766 "state": "enabled", 00:14:26.766 "thread": "nvmf_tgt_poll_group_000", 00:14:26.766 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:26.766 "listen_address": { 00:14:26.766 "trtype": "TCP", 00:14:26.766 "adrfam": "IPv4", 00:14:26.766 "traddr": "10.0.0.2", 00:14:26.766 "trsvcid": "4420" 00:14:26.766 }, 00:14:26.766 "peer_address": { 00:14:26.766 "trtype": "TCP", 00:14:26.766 "adrfam": "IPv4", 00:14:26.766 "traddr": "10.0.0.1", 00:14:26.766 "trsvcid": "53538" 00:14:26.766 }, 00:14:26.766 "auth": { 00:14:26.766 "state": "completed", 00:14:26.766 "digest": "sha256", 00:14:26.766 "dhgroup": "ffdhe4096" 00:14:26.766 } 00:14:26.766 } 00:14:26.766 ]' 00:14:26.766 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:26.766 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:26.766 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:27.023 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:27.023 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:27.023 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:27.023 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:27.023 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.280 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDNkMWE4YjM3NjA3ZDM2NmI5ZWU2YTMxYTdmZmIzYjg2NDEzMTMyNjAyZTllN2E4NGY3NDk0ZjRiNTVjOTgzZmUL5vw=: 00:14:27.280 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret 
DHHC-1:03:NDNkMWE4YjM3NjA3ZDM2NmI5ZWU2YTMxYTdmZmIzYjg2NDEzMTMyNjAyZTllN2E4NGY3NDk0ZjRiNTVjOTgzZmUL5vw=: 00:14:28.212 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.212 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:28.212 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.212 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.212 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.212 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:28.212 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:28.212 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:28.212 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:28.470 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:14:28.470 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:28.470 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:28.470 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:28.470 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:28.470 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.470 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.470 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.470 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.470 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.470 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.470 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.470 07:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.035 00:14:29.035 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:29.035 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:29.035 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.293 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.293 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:29.293 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.293 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.293 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.293 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:29.293 { 00:14:29.293 "cntlid": 33, 00:14:29.293 "qid": 0, 00:14:29.293 "state": "enabled", 00:14:29.293 "thread": "nvmf_tgt_poll_group_000", 00:14:29.293 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:29.293 "listen_address": { 00:14:29.293 "trtype": "TCP", 00:14:29.293 "adrfam": "IPv4", 00:14:29.293 "traddr": "10.0.0.2", 00:14:29.293 "trsvcid": "4420" 00:14:29.293 }, 00:14:29.293 "peer_address": { 00:14:29.293 "trtype": "TCP", 00:14:29.293 "adrfam": "IPv4", 00:14:29.293 "traddr": "10.0.0.1", 00:14:29.293 "trsvcid": "53566" 00:14:29.293 }, 00:14:29.293 "auth": { 00:14:29.293 "state": "completed", 00:14:29.293 "digest": "sha256", 00:14:29.293 "dhgroup": "ffdhe6144" 00:14:29.293 } 00:14:29.293 } 00:14:29.293 ]' 00:14:29.293 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:29.293 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:29.293 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:29.293 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:29.293 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:29.293 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.293 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.293 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.551 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2U2NDdmMDFmNjQzZjc4ZTAyNmU0OTM1Y2Q1YTIyN2NiNTBiODk3YzU0NTRiODI5urJx4w==: --dhchap-ctrl-secret 
DHHC-1:03:ZDNiN2M4NzM3YjBjYzNhM2M3ZGZmZjQ1ZTBjZjJkMmE2MDM1ZDM4MWY5ZmUyOTMwZDcxNmQwMGM2ZTY3MjUxMKdz6b0=: 00:14:29.551 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:Y2U2NDdmMDFmNjQzZjc4ZTAyNmU0OTM1Y2Q1YTIyN2NiNTBiODk3YzU0NTRiODI5urJx4w==: --dhchap-ctrl-secret DHHC-1:03:ZDNiN2M4NzM3YjBjYzNhM2M3ZGZmZjQ1ZTBjZjJkMmE2MDM1ZDM4MWY5ZmUyOTMwZDcxNmQwMGM2ZTY3MjUxMKdz6b0=: 00:14:30.483 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.483 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.483 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:30.483 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.483 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.483 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.483 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:30.483 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:30.483 07:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:30.741 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:14:30.741 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:30.741 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:30.741 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:30.741 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:30.741 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.741 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.741 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.741 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.741 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.741 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.741 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.741 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:31.306 00:14:31.306 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:31.306 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:31.306 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:31.564 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.564 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:31.564 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.564 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.564 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.564 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:31.564 { 00:14:31.564 "cntlid": 35, 00:14:31.564 "qid": 0, 00:14:31.564 "state": "enabled", 00:14:31.564 "thread": "nvmf_tgt_poll_group_000", 00:14:31.564 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:31.564 "listen_address": { 00:14:31.564 "trtype": "TCP", 00:14:31.564 "adrfam": "IPv4", 00:14:31.564 "traddr": "10.0.0.2", 00:14:31.564 "trsvcid": "4420" 00:14:31.564 }, 00:14:31.564 "peer_address": { 00:14:31.564 "trtype": "TCP", 00:14:31.564 "adrfam": "IPv4", 00:14:31.564 "traddr": "10.0.0.1", 00:14:31.564 "trsvcid": "53590" 00:14:31.564 }, 00:14:31.564 "auth": { 00:14:31.564 "state": "completed", 00:14:31.564 "digest": "sha256", 00:14:31.564 "dhgroup": "ffdhe6144" 00:14:31.564 } 00:14:31.564 } 00:14:31.564 ]' 00:14:31.564 07:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:31.822 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:31.822 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:31.822 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:31.822 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:31.822 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.822 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.822 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.080 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTQ4Mjk0MTE2Y2RjMGU1NmFiMmYxOTM2YTBlNjQ4ODAvzKPS: --dhchap-ctrl-secret DHHC-1:02:NGM1MDMxYmRjYzI3NzZhNGUzOWNlOWQ1YTk4ZWVkYTI0N2QyNmQ1Y2VlMDkxM2NjJw2DYg==: 00:14:32.080 07:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MTQ4Mjk0MTE2Y2RjMGU1NmFiMmYxOTM2YTBlNjQ4ODAvzKPS: --dhchap-ctrl-secret DHHC-1:02:NGM1MDMxYmRjYzI3NzZhNGUzOWNlOWQ1YTk4ZWVkYTI0N2QyNmQ1Y2VlMDkxM2NjJw2DYg==: 00:14:33.014 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.014 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:33.014 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.014 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.014 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.014 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:33.014 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:33.014 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:33.272 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:14:33.272 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:33.272 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:33.272 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:33.272 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:33.272 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.272 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:33.272 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.272 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.272 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.272 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:33.272 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:33.272 07:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:33.837 00:14:33.837 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:33.837 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:33.837 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.096 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.096 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.096 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.096 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.096 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.096 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:34.096 { 00:14:34.096 "cntlid": 37, 00:14:34.096 "qid": 0, 00:14:34.096 "state": "enabled", 00:14:34.096 "thread": "nvmf_tgt_poll_group_000", 00:14:34.096 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:34.096 "listen_address": { 00:14:34.096 "trtype": "TCP", 00:14:34.096 "adrfam": "IPv4", 00:14:34.096 "traddr": "10.0.0.2", 00:14:34.096 "trsvcid": "4420" 00:14:34.096 }, 00:14:34.096 "peer_address": { 00:14:34.096 "trtype": "TCP", 00:14:34.096 "adrfam": "IPv4", 00:14:34.096 "traddr": "10.0.0.1", 00:14:34.096 "trsvcid": "44792" 00:14:34.096 }, 00:14:34.096 "auth": { 00:14:34.096 "state": "completed", 00:14:34.096 "digest": "sha256", 00:14:34.096 "dhgroup": "ffdhe6144" 00:14:34.096 } 00:14:34.096 } 00:14:34.096 ]' 00:14:34.096 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:34.096 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:34.096 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:34.096 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:34.096 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:34.354 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.354 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:14:34.354 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:34.612 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: --dhchap-ctrl-secret DHHC-1:01:YWQ5ZDlkNzVlZjU5YmU3YTIxYjE1ZTUxODQwMTExY2SuIum2: 00:14:34.612 07:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: --dhchap-ctrl-secret DHHC-1:01:YWQ5ZDlkNzVlZjU5YmU3YTIxYjE1ZTUxODQwMTExY2SuIum2: 00:14:35.545 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:35.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:35.545 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:35.545 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.545 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.545 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.545 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:35.545 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:35.545 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:35.545 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:14:35.545 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:35.545 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:35.545 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:35.545 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:35.545 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:35.545 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:35.545 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.545 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.545 07:16:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.545 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:35.545 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:35.545 07:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:36.112 00:14:36.112 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:36.112 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:36.112 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.678 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.678 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.678 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.678 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.678 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.678 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:36.678 { 00:14:36.678 "cntlid": 39, 00:14:36.678 "qid": 0, 00:14:36.678 "state": "enabled", 00:14:36.678 "thread": "nvmf_tgt_poll_group_000", 00:14:36.678 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:36.678 "listen_address": { 00:14:36.678 "trtype": "TCP", 00:14:36.678 "adrfam": "IPv4", 00:14:36.678 "traddr": "10.0.0.2", 00:14:36.678 "trsvcid": "4420" 00:14:36.678 }, 00:14:36.678 "peer_address": { 00:14:36.678 "trtype": "TCP", 00:14:36.678 "adrfam": "IPv4", 00:14:36.678 "traddr": "10.0.0.1", 00:14:36.678 "trsvcid": "44806" 00:14:36.679 }, 00:14:36.679 "auth": { 00:14:36.679 "state": "completed", 00:14:36.679 "digest": "sha256", 00:14:36.679 "dhgroup": "ffdhe6144" 00:14:36.679 } 00:14:36.679 } 00:14:36.679 ]' 00:14:36.679 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:36.679 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:36.679 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:36.679 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:36.679 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:36.679 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:14:36.679 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.679 07:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:36.991 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDNkMWE4YjM3NjA3ZDM2NmI5ZWU2YTMxYTdmZmIzYjg2NDEzMTMyNjAyZTllN2E4NGY3NDk0ZjRiNTVjOTgzZmUL5vw=: 00:14:36.991 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NDNkMWE4YjM3NjA3ZDM2NmI5ZWU2YTMxYTdmZmIzYjg2NDEzMTMyNjAyZTllN2E4NGY3NDk0ZjRiNTVjOTgzZmUL5vw=: 00:14:37.924 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:37.924 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:37.924 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.924 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.924 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.924 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:37.924 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:37.924 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:37.924 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:38.182 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:14:38.182 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:38.182 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:38.182 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:38.182 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:38.182 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.182 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.182 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:38.182 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.182 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.182 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.182 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.182 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:39.114 00:14:39.114 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:39.114 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:39.114 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.372 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.372 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.372 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.372 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.372 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.372 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:39.372 { 00:14:39.372 "cntlid": 41, 00:14:39.372 "qid": 0, 00:14:39.372 "state": "enabled", 00:14:39.372 "thread": "nvmf_tgt_poll_group_000", 00:14:39.372 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:39.372 "listen_address": { 00:14:39.372 "trtype": "TCP", 00:14:39.372 "adrfam": "IPv4", 00:14:39.372 "traddr": "10.0.0.2", 00:14:39.372 "trsvcid": "4420" 00:14:39.372 }, 00:14:39.372 "peer_address": { 00:14:39.372 "trtype": "TCP", 00:14:39.372 "adrfam": "IPv4", 00:14:39.372 "traddr": "10.0.0.1", 00:14:39.372 "trsvcid": "44830" 00:14:39.372 }, 00:14:39.372 "auth": { 00:14:39.372 "state": "completed", 00:14:39.372 "digest": "sha256", 00:14:39.372 "dhgroup": "ffdhe8192" 00:14:39.372 } 00:14:39.372 } 00:14:39.372 ]' 00:14:39.372 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:39.372 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:39.372 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:39.372 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:39.372 07:16:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:39.630 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.630 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.630 07:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.887 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2U2NDdmMDFmNjQzZjc4ZTAyNmU0OTM1Y2Q1YTIyN2NiNTBiODk3YzU0NTRiODI5urJx4w==: --dhchap-ctrl-secret DHHC-1:03:ZDNiN2M4NzM3YjBjYzNhM2M3ZGZmZjQ1ZTBjZjJkMmE2MDM1ZDM4MWY5ZmUyOTMwZDcxNmQwMGM2ZTY3MjUxMKdz6b0=: 00:14:39.887 07:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:Y2U2NDdmMDFmNjQzZjc4ZTAyNmU0OTM1Y2Q1YTIyN2NiNTBiODk3YzU0NTRiODI5urJx4w==: --dhchap-ctrl-secret DHHC-1:03:ZDNiN2M4NzM3YjBjYzNhM2M3ZGZmZjQ1ZTBjZjJkMmE2MDM1ZDM4MWY5ZmUyOTMwZDcxNmQwMGM2ZTY3MjUxMKdz6b0=: 00:14:40.821 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.821 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.821 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:40.821 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.821 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.821 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.821 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:40.821 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:40.821 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:41.078 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:14:41.078 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:41.078 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:41.078 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:41.078 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:41.078 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.078 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:41.078 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.078 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.078 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.078 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:41.078 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:41.078 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.012 00:14:42.012 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:42.012 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:42.012 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.270 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.270 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.270 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.270 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.270 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.271 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:42.271 { 00:14:42.271 "cntlid": 43, 00:14:42.271 "qid": 0, 00:14:42.271 "state": "enabled", 00:14:42.271 "thread": "nvmf_tgt_poll_group_000", 00:14:42.271 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:42.271 "listen_address": { 00:14:42.271 "trtype": "TCP", 00:14:42.271 "adrfam": "IPv4", 00:14:42.271 "traddr": "10.0.0.2", 00:14:42.271 "trsvcid": "4420" 00:14:42.271 }, 00:14:42.271 "peer_address": { 00:14:42.271 "trtype": "TCP", 00:14:42.271 "adrfam": "IPv4", 00:14:42.271 "traddr": "10.0.0.1", 00:14:42.271 "trsvcid": "44874" 00:14:42.271 }, 00:14:42.271 "auth": { 00:14:42.271 "state": "completed", 00:14:42.271 "digest": "sha256", 00:14:42.271 "dhgroup": "ffdhe8192" 00:14:42.271 } 00:14:42.271 } 00:14:42.271 ]' 00:14:42.271 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:42.271 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:14:42.271 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:42.271 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:42.271 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:42.271 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.271 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.271 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.529 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTQ4Mjk0MTE2Y2RjMGU1NmFiMmYxOTM2YTBlNjQ4ODAvzKPS: --dhchap-ctrl-secret DHHC-1:02:NGM1MDMxYmRjYzI3NzZhNGUzOWNlOWQ1YTk4ZWVkYTI0N2QyNmQ1Y2VlMDkxM2NjJw2DYg==: 00:14:42.529 07:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MTQ4Mjk0MTE2Y2RjMGU1NmFiMmYxOTM2YTBlNjQ4ODAvzKPS: --dhchap-ctrl-secret DHHC-1:02:NGM1MDMxYmRjYzI3NzZhNGUzOWNlOWQ1YTk4ZWVkYTI0N2QyNmQ1Y2VlMDkxM2NjJw2DYg==: 00:14:43.463 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.463 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:43.463 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.463 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.463 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.463 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:43.463 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:43.463 07:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:43.721 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:14:43.721 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:43.721 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:43.721 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:43.721 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:43.721 07:16:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.721 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:43.721 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.721 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.721 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.721 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:43.721 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:43.721 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:44.655 00:14:44.655 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:44.655 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:44.655 07:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.913 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.913 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.913 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.913 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.913 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.913 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:44.913 { 00:14:44.913 "cntlid": 45, 00:14:44.913 "qid": 0, 00:14:44.913 "state": "enabled", 00:14:44.913 "thread": "nvmf_tgt_poll_group_000", 00:14:44.913 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:44.913 "listen_address": { 00:14:44.913 "trtype": "TCP", 00:14:44.913 "adrfam": "IPv4", 00:14:44.913 "traddr": "10.0.0.2", 00:14:44.913 "trsvcid": "4420" 00:14:44.913 }, 00:14:44.913 "peer_address": { 00:14:44.913 "trtype": "TCP", 00:14:44.913 "adrfam": "IPv4", 00:14:44.913 "traddr": "10.0.0.1", 00:14:44.913 "trsvcid": "40772" 00:14:44.913 }, 00:14:44.913 "auth": { 00:14:44.913 "state": "completed", 00:14:44.913 "digest": "sha256", 00:14:44.913 "dhgroup": "ffdhe8192" 00:14:44.913 } 00:14:44.913 } 00:14:44.913 ]' 00:14:44.913 
07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:44.913 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:44.913 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:45.170 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:45.170 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:45.170 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.170 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.170 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.428 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: --dhchap-ctrl-secret DHHC-1:01:YWQ5ZDlkNzVlZjU5YmU3YTIxYjE1ZTUxODQwMTExY2SuIum2: 00:14:45.428 07:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: --dhchap-ctrl-secret DHHC-1:01:YWQ5ZDlkNzVlZjU5YmU3YTIxYjE1ZTUxODQwMTExY2SuIum2: 00:14:46.359 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.359 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:46.359 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.359 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.359 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.359 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:46.359 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:46.359 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:46.617 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:14:46.617 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:46.617 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:46.617 07:16:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:46.617 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:46.617 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:46.617 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:46.617 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.617 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.617 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.617 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:46.617 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:46.617 07:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:47.550 00:14:47.550 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:47.550 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:47.550 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.808 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.808 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:47.808 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.808 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.808 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.808 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:47.808 { 00:14:47.808 "cntlid": 47, 00:14:47.808 "qid": 0, 00:14:47.808 "state": "enabled", 00:14:47.808 "thread": "nvmf_tgt_poll_group_000", 00:14:47.808 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:47.808 "listen_address": { 00:14:47.808 "trtype": "TCP", 00:14:47.808 "adrfam": "IPv4", 00:14:47.808 "traddr": "10.0.0.2", 00:14:47.808 "trsvcid": "4420" 00:14:47.808 }, 00:14:47.808 "peer_address": { 00:14:47.808 "trtype": "TCP", 00:14:47.808 "adrfam": "IPv4", 00:14:47.808 "traddr": "10.0.0.1", 00:14:47.808 "trsvcid": "40808" 00:14:47.808 }, 00:14:47.808 "auth": { 00:14:47.808 "state": "completed", 00:14:47.808 
"digest": "sha256", 00:14:47.808 "dhgroup": "ffdhe8192" 00:14:47.808 } 00:14:47.808 } 00:14:47.808 ]' 00:14:47.808 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:47.808 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:47.808 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:47.808 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:47.808 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:47.808 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:47.808 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:47.808 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.066 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDNkMWE4YjM3NjA3ZDM2NmI5ZWU2YTMxYTdmZmIzYjg2NDEzMTMyNjAyZTllN2E4NGY3NDk0ZjRiNTVjOTgzZmUL5vw=: 00:14:48.066 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NDNkMWE4YjM3NjA3ZDM2NmI5ZWU2YTMxYTdmZmIzYjg2NDEzMTMyNjAyZTllN2E4NGY3NDk0ZjRiNTVjOTgzZmUL5vw=: 00:14:48.999 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:48.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:48.999 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:48.999 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.999 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.999 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.999 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:48.999 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:48.999 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:48.999 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:48.999 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:49.257 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:14:49.257 07:16:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:49.257 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:49.257 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:49.257 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:49.257 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.257 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:49.257 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.257 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.515 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.515 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:49.515 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:49.515 07:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:49.773 00:14:49.773 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:49.773 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:49.773 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.033 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.033 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.033 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.033 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.033 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.033 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:50.033 { 00:14:50.033 "cntlid": 49, 00:14:50.033 "qid": 0, 00:14:50.033 "state": "enabled", 00:14:50.033 "thread": "nvmf_tgt_poll_group_000", 00:14:50.033 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:50.033 "listen_address": { 00:14:50.033 "trtype": "TCP", 00:14:50.033 "adrfam": "IPv4", 
00:14:50.033 "traddr": "10.0.0.2", 00:14:50.033 "trsvcid": "4420" 00:14:50.033 }, 00:14:50.033 "peer_address": { 00:14:50.033 "trtype": "TCP", 00:14:50.033 "adrfam": "IPv4", 00:14:50.033 "traddr": "10.0.0.1", 00:14:50.033 "trsvcid": "40838" 00:14:50.033 }, 00:14:50.033 "auth": { 00:14:50.033 "state": "completed", 00:14:50.033 "digest": "sha384", 00:14:50.033 "dhgroup": "null" 00:14:50.033 } 00:14:50.033 } 00:14:50.033 ]' 00:14:50.033 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:50.033 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:50.033 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:50.033 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:50.033 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:50.033 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.033 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.033 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.323 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2U2NDdmMDFmNjQzZjc4ZTAyNmU0OTM1Y2Q1YTIyN2NiNTBiODk3YzU0NTRiODI5urJx4w==: --dhchap-ctrl-secret DHHC-1:03:ZDNiN2M4NzM3YjBjYzNhM2M3ZGZmZjQ1ZTBjZjJkMmE2MDM1ZDM4MWY5ZmUyOTMwZDcxNmQwMGM2ZTY3MjUxMKdz6b0=: 00:14:50.324 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:Y2U2NDdmMDFmNjQzZjc4ZTAyNmU0OTM1Y2Q1YTIyN2NiNTBiODk3YzU0NTRiODI5urJx4w==: --dhchap-ctrl-secret DHHC-1:03:ZDNiN2M4NzM3YjBjYzNhM2M3ZGZmZjQ1ZTBjZjJkMmE2MDM1ZDM4MWY5ZmUyOTMwZDcxNmQwMGM2ZTY3MjUxMKdz6b0=: 00:14:51.280 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.280 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:51.280 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.280 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.280 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.280 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:51.280 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:51.280 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:51.538 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:14:51.538 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:51.538 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:51.538 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:51.538 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:51.538 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.538 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:51.538 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.538 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.538 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.538 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:51.538 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:51.539 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:52.104 00:14:52.104 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:52.104 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:52.104 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.104 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.104 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.104 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.104 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.104 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.104 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:52.104 { 00:14:52.104 "cntlid": 51, 00:14:52.104 "qid": 0, 00:14:52.104 "state": "enabled", 
00:14:52.104 "thread": "nvmf_tgt_poll_group_000", 00:14:52.104 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:52.104 "listen_address": { 00:14:52.104 "trtype": "TCP", 00:14:52.104 "adrfam": "IPv4", 00:14:52.104 "traddr": "10.0.0.2", 00:14:52.104 "trsvcid": "4420" 00:14:52.104 }, 00:14:52.104 "peer_address": { 00:14:52.104 "trtype": "TCP", 00:14:52.104 "adrfam": "IPv4", 00:14:52.104 "traddr": "10.0.0.1", 00:14:52.104 "trsvcid": "40880" 00:14:52.104 }, 00:14:52.104 "auth": { 00:14:52.104 "state": "completed", 00:14:52.104 "digest": "sha384", 00:14:52.104 "dhgroup": "null" 00:14:52.104 } 00:14:52.104 } 00:14:52.104 ]' 00:14:52.104 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:52.362 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:52.362 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:52.362 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:52.362 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:52.362 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.362 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.362 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.619 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTQ4Mjk0MTE2Y2RjMGU1NmFiMmYxOTM2YTBlNjQ4ODAvzKPS: --dhchap-ctrl-secret DHHC-1:02:NGM1MDMxYmRjYzI3NzZhNGUzOWNlOWQ1YTk4ZWVkYTI0N2QyNmQ1Y2VlMDkxM2NjJw2DYg==: 00:14:52.619 07:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MTQ4Mjk0MTE2Y2RjMGU1NmFiMmYxOTM2YTBlNjQ4ODAvzKPS: --dhchap-ctrl-secret DHHC-1:02:NGM1MDMxYmRjYzI3NzZhNGUzOWNlOWQ1YTk4ZWVkYTI0N2QyNmQ1Y2VlMDkxM2NjJw2DYg==: 00:14:53.550 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.550 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:53.550 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.550 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.550 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.550 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:53.550 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:14:53.550 07:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:53.807 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:14:53.807 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:53.807 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:53.807 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:53.807 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:53.807 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.807 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:53.807 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.808 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.808 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.808 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:53.808 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:53.808 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:54.065 00:14:54.065 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:54.065 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:54.065 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.323 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.323 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.323 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.323 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.323 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.323 07:16:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:54.323 { 00:14:54.323 "cntlid": 53, 00:14:54.323 "qid": 0, 00:14:54.323 "state": "enabled", 00:14:54.323 "thread": "nvmf_tgt_poll_group_000", 00:14:54.323 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:54.323 "listen_address": { 00:14:54.323 "trtype": "TCP", 00:14:54.323 "adrfam": "IPv4", 00:14:54.323 "traddr": "10.0.0.2", 00:14:54.323 "trsvcid": "4420" 00:14:54.323 }, 00:14:54.323 "peer_address": { 00:14:54.323 "trtype": "TCP", 00:14:54.323 "adrfam": "IPv4", 00:14:54.323 "traddr": "10.0.0.1", 00:14:54.323 "trsvcid": "48504" 00:14:54.323 }, 00:14:54.323 "auth": { 00:14:54.323 "state": "completed", 00:14:54.323 "digest": "sha384", 00:14:54.323 "dhgroup": "null" 00:14:54.323 } 00:14:54.323 } 00:14:54.323 ]' 00:14:54.323 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:54.581 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:54.581 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:54.581 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:54.581 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:54.581 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.581 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.581 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.839 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: --dhchap-ctrl-secret DHHC-1:01:YWQ5ZDlkNzVlZjU5YmU3YTIxYjE1ZTUxODQwMTExY2SuIum2: 00:14:54.839 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: --dhchap-ctrl-secret DHHC-1:01:YWQ5ZDlkNzVlZjU5YmU3YTIxYjE1ZTUxODQwMTExY2SuIum2: 00:14:55.771 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.771 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:55.771 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.771 07:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.771 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.771 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:14:55.771 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:55.771 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:56.028 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:14:56.028 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:56.028 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:56.028 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:56.028 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:56.028 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.028 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:56.028 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.028 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.028 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.028 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:56.028 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:56.028 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:56.287 00:14:56.287 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:56.287 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:56.287 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.545 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.545 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.545 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.545 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.545 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.545 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:56.545 { 00:14:56.545 "cntlid": 55, 00:14:56.545 "qid": 0, 00:14:56.545 "state": "enabled", 00:14:56.545 "thread": "nvmf_tgt_poll_group_000", 00:14:56.545 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:56.545 "listen_address": { 00:14:56.545 "trtype": "TCP", 00:14:56.545 "adrfam": "IPv4", 00:14:56.545 "traddr": "10.0.0.2", 00:14:56.545 "trsvcid": "4420" 00:14:56.545 }, 00:14:56.545 "peer_address": { 00:14:56.545 "trtype": "TCP", 00:14:56.545 "adrfam": "IPv4", 00:14:56.545 "traddr": "10.0.0.1", 00:14:56.545 "trsvcid": "48514" 00:14:56.545 }, 00:14:56.545 "auth": { 00:14:56.545 "state": "completed", 00:14:56.545 "digest": "sha384", 00:14:56.545 "dhgroup": "null" 00:14:56.545 } 00:14:56.545 } 00:14:56.545 ]' 00:14:56.545 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:56.803 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:56.803 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:56.803 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:56.803 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:56.803 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.803 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.803 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:57.061 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDNkMWE4YjM3NjA3ZDM2NmI5ZWU2YTMxYTdmZmIzYjg2NDEzMTMyNjAyZTllN2E4NGY3NDk0ZjRiNTVjOTgzZmUL5vw=: 00:14:57.061 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NDNkMWE4YjM3NjA3ZDM2NmI5ZWU2YTMxYTdmZmIzYjg2NDEzMTMyNjAyZTllN2E4NGY3NDk0ZjRiNTVjOTgzZmUL5vw=: 00:14:57.994 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.994 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:57.994 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.994 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.994 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.994 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:57.994 07:17:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:57.994 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:57.994 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:58.252 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:14:58.252 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:58.252 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:58.252 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:58.252 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:58.252 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.252 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.252 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.252 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.252 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.252 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.252 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.252 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.510 00:14:58.510 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:58.510 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:58.510 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.768 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.768 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.768 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:58.768 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.768 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.768 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:58.768 { 00:14:58.768 "cntlid": 57, 00:14:58.768 "qid": 0, 00:14:58.768 "state": "enabled", 00:14:58.768 "thread": "nvmf_tgt_poll_group_000", 00:14:58.768 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:58.768 "listen_address": { 00:14:58.768 "trtype": "TCP", 00:14:58.768 "adrfam": "IPv4", 00:14:58.768 "traddr": "10.0.0.2", 00:14:58.768 "trsvcid": "4420" 00:14:58.768 }, 00:14:58.768 "peer_address": { 00:14:58.768 "trtype": "TCP", 00:14:58.768 "adrfam": "IPv4", 00:14:58.768 "traddr": "10.0.0.1", 00:14:58.768 "trsvcid": "48530" 00:14:58.768 }, 00:14:58.768 "auth": { 00:14:58.768 "state": "completed", 00:14:58.768 "digest": "sha384", 00:14:58.768 "dhgroup": "ffdhe2048" 00:14:58.768 } 00:14:58.768 } 00:14:58.768 ]' 00:14:58.768 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:59.026 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:59.026 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:59.026 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:59.026 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:59.026 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.026 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.026 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:59.285 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2U2NDdmMDFmNjQzZjc4ZTAyNmU0OTM1Y2Q1YTIyN2NiNTBiODk3YzU0NTRiODI5urJx4w==: --dhchap-ctrl-secret DHHC-1:03:ZDNiN2M4NzM3YjBjYzNhM2M3ZGZmZjQ1ZTBjZjJkMmE2MDM1ZDM4MWY5ZmUyOTMwZDcxNmQwMGM2ZTY3MjUxMKdz6b0=: 00:14:59.285 07:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:Y2U2NDdmMDFmNjQzZjc4ZTAyNmU0OTM1Y2Q1YTIyN2NiNTBiODk3YzU0NTRiODI5urJx4w==: --dhchap-ctrl-secret DHHC-1:03:ZDNiN2M4NzM3YjBjYzNhM2M3ZGZmZjQ1ZTBjZjJkMmE2MDM1ZDM4MWY5ZmUyOTMwZDcxNmQwMGM2ZTY3MjUxMKdz6b0=: 00:15:00.219 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.219 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:00.219 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.219 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.219 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.219 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:00.219 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:00.219 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:00.477 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:15:00.477 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:00.477 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:00.477 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:00.477 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:00.477 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.477 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:00.477 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.477 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.477 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.477 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:00.477 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:00.477 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:00.735 00:15:00.735 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:00.735 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:00.735 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.993 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.993 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.993 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.993 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.993 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.993 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:00.993 { 00:15:00.993 "cntlid": 59, 00:15:00.993 "qid": 0, 00:15:00.993 "state": "enabled", 00:15:00.993 "thread": "nvmf_tgt_poll_group_000", 00:15:00.993 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:00.993 "listen_address": { 00:15:00.993 "trtype": "TCP", 00:15:00.993 "adrfam": "IPv4", 00:15:00.993 "traddr": "10.0.0.2", 00:15:00.993 "trsvcid": "4420" 00:15:00.993 }, 00:15:00.993 "peer_address": { 00:15:00.993 "trtype": "TCP", 00:15:00.993 "adrfam": "IPv4", 00:15:00.993 "traddr": "10.0.0.1", 00:15:00.993 "trsvcid": "48566" 00:15:00.993 }, 00:15:00.993 "auth": { 00:15:00.993 "state": "completed", 00:15:00.993 "digest": "sha384", 00:15:00.993 "dhgroup": "ffdhe2048" 00:15:00.993 } 00:15:00.993 } 00:15:00.993 ]' 00:15:00.993 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:00.993 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:00.993 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:00.993 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:00.993 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:01.251 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.251 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.251 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.508 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTQ4Mjk0MTE2Y2RjMGU1NmFiMmYxOTM2YTBlNjQ4ODAvzKPS: --dhchap-ctrl-secret DHHC-1:02:NGM1MDMxYmRjYzI3NzZhNGUzOWNlOWQ1YTk4ZWVkYTI0N2QyNmQ1Y2VlMDkxM2NjJw2DYg==: 00:15:01.508 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MTQ4Mjk0MTE2Y2RjMGU1NmFiMmYxOTM2YTBlNjQ4ODAvzKPS: --dhchap-ctrl-secret DHHC-1:02:NGM1MDMxYmRjYzI3NzZhNGUzOWNlOWQ1YTk4ZWVkYTI0N2QyNmQ1Y2VlMDkxM2NjJw2DYg==: 00:15:02.442 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.442 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:02.442 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.442 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.442 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.442 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:02.442 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:02.442 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:02.700 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:15:02.700 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:02.700 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:02.700 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:02.700 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:02.700 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.700 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.700 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.700 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.700 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.700 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.700 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.700 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.959 00:15:02.959 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:02.959 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:02.959 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.217 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.217 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.217 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.217 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.217 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.217 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:03.217 { 00:15:03.217 "cntlid": 61, 00:15:03.217 "qid": 0, 00:15:03.217 "state": "enabled", 00:15:03.217 "thread": "nvmf_tgt_poll_group_000", 00:15:03.217 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:03.217 "listen_address": { 00:15:03.217 "trtype": "TCP", 00:15:03.217 "adrfam": "IPv4", 00:15:03.217 "traddr": "10.0.0.2", 00:15:03.217 "trsvcid": "4420" 00:15:03.217 }, 00:15:03.217 "peer_address": { 00:15:03.217 "trtype": "TCP", 00:15:03.217 "adrfam": "IPv4", 00:15:03.217 "traddr": "10.0.0.1", 00:15:03.217 "trsvcid": "58354" 00:15:03.217 }, 00:15:03.217 "auth": { 00:15:03.217 "state": "completed", 00:15:03.217 "digest": "sha384", 00:15:03.217 "dhgroup": "ffdhe2048" 00:15:03.217 } 00:15:03.217 } 00:15:03.217 ]' 00:15:03.217 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:03.217 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:03.217 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:03.217 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:03.217 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:03.475 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.475 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.475 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.733 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: --dhchap-ctrl-secret DHHC-1:01:YWQ5ZDlkNzVlZjU5YmU3YTIxYjE1ZTUxODQwMTExY2SuIum2: 00:15:03.733 07:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: --dhchap-ctrl-secret DHHC-1:01:YWQ5ZDlkNzVlZjU5YmU3YTIxYjE1ZTUxODQwMTExY2SuIum2: 00:15:04.667 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.667 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:04.667 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.667 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.667 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.667 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:04.667 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:04.667 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:04.925 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:15:04.925 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:04.925 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:04.925 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:04.925 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:04.925 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.925 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:04.925 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.925 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.925 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.925 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:04.925 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:04.925 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:05.184 00:15:05.184 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:05.184 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:15:05.184 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.442 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.442 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.442 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.442 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.442 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.442 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:05.442 { 00:15:05.442 "cntlid": 63, 00:15:05.442 "qid": 0, 00:15:05.442 "state": "enabled", 00:15:05.442 "thread": "nvmf_tgt_poll_group_000", 00:15:05.442 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:05.442 "listen_address": { 00:15:05.442 "trtype": "TCP", 00:15:05.442 "adrfam": "IPv4", 00:15:05.442 "traddr": "10.0.0.2", 00:15:05.442 "trsvcid": "4420" 00:15:05.442 }, 00:15:05.442 "peer_address": { 00:15:05.442 "trtype": "TCP", 00:15:05.442 "adrfam": "IPv4", 00:15:05.442 "traddr": "10.0.0.1", 00:15:05.442 "trsvcid": "58380" 00:15:05.442 }, 00:15:05.442 "auth": { 00:15:05.442 "state": "completed", 00:15:05.442 "digest": "sha384", 00:15:05.442 "dhgroup": "ffdhe2048" 00:15:05.442 } 00:15:05.442 } 00:15:05.442 ]' 00:15:05.442 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:05.442 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:05.442 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:05.442 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:05.442 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:05.700 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.700 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.700 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.958 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDNkMWE4YjM3NjA3ZDM2NmI5ZWU2YTMxYTdmZmIzYjg2NDEzMTMyNjAyZTllN2E4NGY3NDk0ZjRiNTVjOTgzZmUL5vw=: 00:15:05.958 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NDNkMWE4YjM3NjA3ZDM2NmI5ZWU2YTMxYTdmZmIzYjg2NDEzMTMyNjAyZTllN2E4NGY3NDk0ZjRiNTVjOTgzZmUL5vw=: 00:15:06.892 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:15:06.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.892 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:06.892 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.892 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.892 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.892 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:06.893 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:06.893 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:06.893 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:07.150 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:15:07.151 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:07.151 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:07.151 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:07.151 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:07.151 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.151 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.151 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.151 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.151 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.151 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.151 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.151 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.409 
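Each attach is followed by the same verification pass, visible again in the ffdhe3072 iteration below: the test confirms that a controller named nvme0 exists on the host application, then asks the target for the subsystem's queue pairs and checks that the reported auth digest, DH group and state match what was negotiated. A small sketch of those checks under the same assumptions as above (host socket from the log, target socket and rpc.py path assumed):

# Controller must be visible on the host application.
name=$(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ "$name" == nvme0 ]]

# Target must report a completed DH-HMAC-CHAP negotiation on the new qpair.
qpairs=$(scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]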
00:15:07.409 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:07.409 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:07.409 07:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.666 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.667 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.667 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.667 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.667 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.667 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:07.667 { 00:15:07.667 "cntlid": 65, 00:15:07.667 "qid": 0, 00:15:07.667 "state": "enabled", 00:15:07.667 "thread": "nvmf_tgt_poll_group_000", 00:15:07.667 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:07.667 "listen_address": { 00:15:07.667 "trtype": "TCP", 00:15:07.667 "adrfam": "IPv4", 00:15:07.667 "traddr": "10.0.0.2", 00:15:07.667 "trsvcid": "4420" 00:15:07.667 }, 00:15:07.667 "peer_address": { 00:15:07.667 "trtype": "TCP", 00:15:07.667 "adrfam": "IPv4", 00:15:07.667 "traddr": "10.0.0.1", 00:15:07.667 "trsvcid": "58408" 00:15:07.667 }, 00:15:07.667 "auth": { 00:15:07.667 "state": "completed", 00:15:07.667 "digest": "sha384", 00:15:07.667 "dhgroup": "ffdhe3072" 00:15:07.667 } 00:15:07.667 } 00:15:07.667 ]' 00:15:07.667 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:07.667 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:07.667 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:07.924 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:07.924 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:07.924 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.924 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.924 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.182 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2U2NDdmMDFmNjQzZjc4ZTAyNmU0OTM1Y2Q1YTIyN2NiNTBiODk3YzU0NTRiODI5urJx4w==: --dhchap-ctrl-secret DHHC-1:03:ZDNiN2M4NzM3YjBjYzNhM2M3ZGZmZjQ1ZTBjZjJkMmE2MDM1ZDM4MWY5ZmUyOTMwZDcxNmQwMGM2ZTY3MjUxMKdz6b0=: 00:15:08.182 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:Y2U2NDdmMDFmNjQzZjc4ZTAyNmU0OTM1Y2Q1YTIyN2NiNTBiODk3YzU0NTRiODI5urJx4w==: --dhchap-ctrl-secret DHHC-1:03:ZDNiN2M4NzM3YjBjYzNhM2M3ZGZmZjQ1ZTBjZjJkMmE2MDM1ZDM4MWY5ZmUyOTMwZDcxNmQwMGM2ZTY3MjUxMKdz6b0=: 00:15:09.116 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.116 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:09.116 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.116 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.116 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.116 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:09.116 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:09.116 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:09.374 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:15:09.374 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:09.374 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:09.374 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:09.374 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:09.374 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.374 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.374 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.374 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.374 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.374 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.374 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.374 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.632 00:15:09.632 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:09.632 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:09.632 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.891 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.891 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.891 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.891 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.891 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.891 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:09.891 { 00:15:09.891 "cntlid": 67, 00:15:09.891 "qid": 0, 00:15:09.891 "state": "enabled", 00:15:09.891 "thread": "nvmf_tgt_poll_group_000", 00:15:09.891 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:09.891 "listen_address": { 00:15:09.891 "trtype": "TCP", 00:15:09.891 "adrfam": "IPv4", 00:15:09.891 "traddr": "10.0.0.2", 00:15:09.891 "trsvcid": "4420" 00:15:09.891 }, 00:15:09.891 "peer_address": { 00:15:09.891 "trtype": "TCP", 00:15:09.891 "adrfam": "IPv4", 00:15:09.891 "traddr": "10.0.0.1", 00:15:09.891 "trsvcid": "58436" 00:15:09.891 }, 00:15:09.891 "auth": { 00:15:09.891 "state": "completed", 00:15:09.891 "digest": "sha384", 00:15:09.891 "dhgroup": "ffdhe3072" 00:15:09.891 } 00:15:09.891 } 00:15:09.891 ]' 00:15:09.891 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:10.148 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:10.148 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:10.148 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:10.148 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:10.148 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.148 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.148 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.406 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTQ4Mjk0MTE2Y2RjMGU1NmFiMmYxOTM2YTBlNjQ4ODAvzKPS: --dhchap-ctrl-secret 
DHHC-1:02:NGM1MDMxYmRjYzI3NzZhNGUzOWNlOWQ1YTk4ZWVkYTI0N2QyNmQ1Y2VlMDkxM2NjJw2DYg==: 00:15:10.406 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MTQ4Mjk0MTE2Y2RjMGU1NmFiMmYxOTM2YTBlNjQ4ODAvzKPS: --dhchap-ctrl-secret DHHC-1:02:NGM1MDMxYmRjYzI3NzZhNGUzOWNlOWQ1YTk4ZWVkYTI0N2QyNmQ1Y2VlMDkxM2NjJw2DYg==: 00:15:11.341 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.341 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:11.341 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.341 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.341 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.341 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:11.341 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:11.341 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:11.600 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:15:11.600 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:11.600 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:11.600 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:11.600 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:11.600 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.600 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.600 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.600 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.600 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.600 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.600 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.600 07:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.167 00:15:12.167 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:12.167 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:12.167 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.425 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.425 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.425 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.425 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.425 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.425 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:12.425 { 00:15:12.425 "cntlid": 69, 00:15:12.425 "qid": 0, 00:15:12.425 "state": "enabled", 00:15:12.425 "thread": "nvmf_tgt_poll_group_000", 00:15:12.425 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:12.425 "listen_address": { 00:15:12.425 "trtype": "TCP", 00:15:12.425 "adrfam": "IPv4", 00:15:12.425 "traddr": "10.0.0.2", 00:15:12.425 "trsvcid": "4420" 00:15:12.425 }, 00:15:12.425 "peer_address": { 00:15:12.425 "trtype": "TCP", 00:15:12.425 "adrfam": "IPv4", 00:15:12.425 "traddr": "10.0.0.1", 00:15:12.425 "trsvcid": "58458" 00:15:12.425 }, 00:15:12.425 "auth": { 00:15:12.425 "state": "completed", 00:15:12.425 "digest": "sha384", 00:15:12.425 "dhgroup": "ffdhe3072" 00:15:12.425 } 00:15:12.425 } 00:15:12.425 ]' 00:15:12.425 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:12.425 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:12.425 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:12.425 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:12.425 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:12.425 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.425 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.425 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:15:12.683 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: --dhchap-ctrl-secret DHHC-1:01:YWQ5ZDlkNzVlZjU5YmU3YTIxYjE1ZTUxODQwMTExY2SuIum2: 00:15:12.683 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: --dhchap-ctrl-secret DHHC-1:01:YWQ5ZDlkNzVlZjU5YmU3YTIxYjE1ZTUxODQwMTExY2SuIum2: 00:15:13.616 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.616 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:13.616 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.616 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.616 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.616 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:13.616 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:13.616 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:13.874 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:15:13.874 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:13.874 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:13.874 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:13.874 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:13.874 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.874 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:13.874 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.874 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.874 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.874 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
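Each connect_authenticate iteration in the trace above follows the same pattern: the target registers the host NQN on the subsystem with a DH-HMAC-CHAP key, the host-side bdev_nvme initiator restricts the digest and DH group it will offer, and the controller attach only succeeds once authentication completes, after which the qpair reports auth state "completed" with the expected digest and dhgroup. A condensed sketch of one such iteration, assuming a target already listening on 10.0.0.2:4420 and a host RPC server on /var/tmp/host.sock as in the trace; scripts/rpc.py abbreviates the full workspace path used above, and key2/ckey2 stand for key names registered earlier in the test:

# Target side: allow the host NQN with its DH-HMAC-CHAP key (and optional ctrlr key)
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Host side: restrict the digest and DH group offered during authentication
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

# Host side: attach; the controller only comes up if DH-HMAC-CHAP succeeds
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Verify: controller name on the host, auth state/digest/dhgroup on the target
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq '.[0].auth'

# Tear down before the next digest/dhgroup/key combination
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0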
00:15:13.874 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:13.874 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:14.441 00:15:14.441 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:14.441 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.441 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:14.699 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.699 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.699 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.699 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.699 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.699 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:14.699 { 00:15:14.699 "cntlid": 71, 00:15:14.699 "qid": 0, 00:15:14.699 "state": "enabled", 00:15:14.699 "thread": "nvmf_tgt_poll_group_000", 00:15:14.699 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:14.699 "listen_address": { 00:15:14.699 "trtype": "TCP", 00:15:14.699 "adrfam": "IPv4", 00:15:14.699 "traddr": "10.0.0.2", 00:15:14.699 "trsvcid": "4420" 00:15:14.699 }, 00:15:14.699 "peer_address": { 00:15:14.699 "trtype": "TCP", 00:15:14.699 "adrfam": "IPv4", 00:15:14.699 "traddr": "10.0.0.1", 00:15:14.699 "trsvcid": "55626" 00:15:14.699 }, 00:15:14.699 "auth": { 00:15:14.699 "state": "completed", 00:15:14.699 "digest": "sha384", 00:15:14.699 "dhgroup": "ffdhe3072" 00:15:14.699 } 00:15:14.699 } 00:15:14.699 ]' 00:15:14.699 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:14.699 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:14.699 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:14.700 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:14.700 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:14.700 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.700 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.700 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.265 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDNkMWE4YjM3NjA3ZDM2NmI5ZWU2YTMxYTdmZmIzYjg2NDEzMTMyNjAyZTllN2E4NGY3NDk0ZjRiNTVjOTgzZmUL5vw=: 00:15:15.265 07:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NDNkMWE4YjM3NjA3ZDM2NmI5ZWU2YTMxYTdmZmIzYjg2NDEzMTMyNjAyZTllN2E4NGY3NDk0ZjRiNTVjOTgzZmUL5vw=: 00:15:16.199 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.199 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:16.199 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.199 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.199 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.199 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:16.199 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:16.199 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:16.199 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:16.456 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:15:16.456 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:16.456 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:16.456 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:16.456 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:16.456 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.456 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.456 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.456 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.456 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
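Besides the bdev_nvme initiator, each iteration also exercises the kernel host through nvme-cli: the DHHC-1 host and controller secrets are passed directly on the command line, and the connect only succeeds while the target still has the host NQN registered with the matching key. A minimal sketch of that half of the flow, mirroring the address, NQNs and flags from the trace (-i 1 selects one I/O queue, -l 0 sets ctrl-loss-tmo); the DHHC-1 strings below are placeholders for the generated secrets shown in the log:

# Kernel initiator: connect with DH-HMAC-CHAP host and controller secrets
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
    --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 \
    --dhchap-secret 'DHHC-1:03:<host secret>' \
    --dhchap-ctrl-secret 'DHHC-1:03:<controller secret>'

# Tear down and drop the host from the subsystem before the next key/dhgroup pair
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a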
00:15:16.456 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.456 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.456 07:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.714 00:15:16.972 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:16.972 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:16.972 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.229 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.229 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.229 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.229 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.229 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.229 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:17.229 { 00:15:17.229 "cntlid": 73, 00:15:17.229 "qid": 0, 00:15:17.229 "state": "enabled", 00:15:17.229 "thread": "nvmf_tgt_poll_group_000", 00:15:17.229 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:17.229 "listen_address": { 00:15:17.229 "trtype": "TCP", 00:15:17.229 "adrfam": "IPv4", 00:15:17.229 "traddr": "10.0.0.2", 00:15:17.229 "trsvcid": "4420" 00:15:17.229 }, 00:15:17.229 "peer_address": { 00:15:17.229 "trtype": "TCP", 00:15:17.229 "adrfam": "IPv4", 00:15:17.229 "traddr": "10.0.0.1", 00:15:17.229 "trsvcid": "55648" 00:15:17.229 }, 00:15:17.229 "auth": { 00:15:17.229 "state": "completed", 00:15:17.229 "digest": "sha384", 00:15:17.229 "dhgroup": "ffdhe4096" 00:15:17.229 } 00:15:17.229 } 00:15:17.229 ]' 00:15:17.229 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:17.229 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:17.229 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:17.229 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:17.229 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:17.229 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.229 
07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.229 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.487 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2U2NDdmMDFmNjQzZjc4ZTAyNmU0OTM1Y2Q1YTIyN2NiNTBiODk3YzU0NTRiODI5urJx4w==: --dhchap-ctrl-secret DHHC-1:03:ZDNiN2M4NzM3YjBjYzNhM2M3ZGZmZjQ1ZTBjZjJkMmE2MDM1ZDM4MWY5ZmUyOTMwZDcxNmQwMGM2ZTY3MjUxMKdz6b0=: 00:15:17.487 07:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:Y2U2NDdmMDFmNjQzZjc4ZTAyNmU0OTM1Y2Q1YTIyN2NiNTBiODk3YzU0NTRiODI5urJx4w==: --dhchap-ctrl-secret DHHC-1:03:ZDNiN2M4NzM3YjBjYzNhM2M3ZGZmZjQ1ZTBjZjJkMmE2MDM1ZDM4MWY5ZmUyOTMwZDcxNmQwMGM2ZTY3MjUxMKdz6b0=: 00:15:18.420 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.420 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:18.420 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.420 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.420 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.420 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:18.420 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:18.420 07:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:18.678 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:15:18.678 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:18.678 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:18.678 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:18.678 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:18.678 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.679 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.679 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.679 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.679 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.679 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.679 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.679 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.245 00:15:19.245 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:19.245 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.245 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:19.552 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.552 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.552 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.552 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.552 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.552 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:19.552 { 00:15:19.552 "cntlid": 75, 00:15:19.552 "qid": 0, 00:15:19.552 "state": "enabled", 00:15:19.552 "thread": "nvmf_tgt_poll_group_000", 00:15:19.552 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:19.552 "listen_address": { 00:15:19.552 "trtype": "TCP", 00:15:19.552 "adrfam": "IPv4", 00:15:19.552 "traddr": "10.0.0.2", 00:15:19.552 "trsvcid": "4420" 00:15:19.552 }, 00:15:19.552 "peer_address": { 00:15:19.552 "trtype": "TCP", 00:15:19.552 "adrfam": "IPv4", 00:15:19.552 "traddr": "10.0.0.1", 00:15:19.552 "trsvcid": "55686" 00:15:19.552 }, 00:15:19.552 "auth": { 00:15:19.552 "state": "completed", 00:15:19.552 "digest": "sha384", 00:15:19.553 "dhgroup": "ffdhe4096" 00:15:19.553 } 00:15:19.553 } 00:15:19.553 ]' 00:15:19.553 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:19.553 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:19.553 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:19.553 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:15:19.553 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:19.553 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.553 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.553 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.838 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTQ4Mjk0MTE2Y2RjMGU1NmFiMmYxOTM2YTBlNjQ4ODAvzKPS: --dhchap-ctrl-secret DHHC-1:02:NGM1MDMxYmRjYzI3NzZhNGUzOWNlOWQ1YTk4ZWVkYTI0N2QyNmQ1Y2VlMDkxM2NjJw2DYg==: 00:15:19.838 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MTQ4Mjk0MTE2Y2RjMGU1NmFiMmYxOTM2YTBlNjQ4ODAvzKPS: --dhchap-ctrl-secret DHHC-1:02:NGM1MDMxYmRjYzI3NzZhNGUzOWNlOWQ1YTk4ZWVkYTI0N2QyNmQ1Y2VlMDkxM2NjJw2DYg==: 00:15:20.772 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.772 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:20.772 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.772 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.772 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.772 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:20.772 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:20.772 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:21.029 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:15:21.029 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:21.029 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:21.029 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:21.029 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:21.029 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.029 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.029 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.029 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.029 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.029 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.029 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.030 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.595 00:15:21.595 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:21.595 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:21.595 07:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.595 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.595 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.595 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.595 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.853 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.853 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:21.853 { 00:15:21.853 "cntlid": 77, 00:15:21.853 "qid": 0, 00:15:21.853 "state": "enabled", 00:15:21.853 "thread": "nvmf_tgt_poll_group_000", 00:15:21.853 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:21.853 "listen_address": { 00:15:21.853 "trtype": "TCP", 00:15:21.853 "adrfam": "IPv4", 00:15:21.853 "traddr": "10.0.0.2", 00:15:21.853 "trsvcid": "4420" 00:15:21.853 }, 00:15:21.853 "peer_address": { 00:15:21.853 "trtype": "TCP", 00:15:21.853 "adrfam": "IPv4", 00:15:21.853 "traddr": "10.0.0.1", 00:15:21.853 "trsvcid": "55720" 00:15:21.853 }, 00:15:21.853 "auth": { 00:15:21.853 "state": "completed", 00:15:21.853 "digest": "sha384", 00:15:21.853 "dhgroup": "ffdhe4096" 00:15:21.853 } 00:15:21.853 } 00:15:21.853 ]' 00:15:21.853 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:21.853 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:21.853 07:17:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:21.853 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:21.853 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:21.853 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.853 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.854 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.111 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: --dhchap-ctrl-secret DHHC-1:01:YWQ5ZDlkNzVlZjU5YmU3YTIxYjE1ZTUxODQwMTExY2SuIum2: 00:15:22.111 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: --dhchap-ctrl-secret DHHC-1:01:YWQ5ZDlkNzVlZjU5YmU3YTIxYjE1ZTUxODQwMTExY2SuIum2: 00:15:23.045 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.045 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.045 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:23.045 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.045 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.045 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.045 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.045 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:23.045 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:23.302 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:15:23.302 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:23.302 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:23.302 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:23.302 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:23.302 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.302 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:23.302 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.302 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.302 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.302 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:23.302 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:23.303 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:23.560 00:15:23.560 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:23.560 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.560 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:24.125 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.125 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.125 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.125 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.125 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.125 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.125 { 00:15:24.125 "cntlid": 79, 00:15:24.125 "qid": 0, 00:15:24.125 "state": "enabled", 00:15:24.125 "thread": "nvmf_tgt_poll_group_000", 00:15:24.125 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:24.125 "listen_address": { 00:15:24.125 "trtype": "TCP", 00:15:24.125 "adrfam": "IPv4", 00:15:24.125 "traddr": "10.0.0.2", 00:15:24.125 "trsvcid": "4420" 00:15:24.125 }, 00:15:24.125 "peer_address": { 00:15:24.125 "trtype": "TCP", 00:15:24.125 "adrfam": "IPv4", 00:15:24.125 "traddr": "10.0.0.1", 00:15:24.125 "trsvcid": "50178" 00:15:24.125 }, 00:15:24.125 "auth": { 00:15:24.125 "state": "completed", 00:15:24.125 "digest": "sha384", 00:15:24.125 "dhgroup": "ffdhe4096" 00:15:24.125 } 00:15:24.125 } 00:15:24.125 ]' 00:15:24.125 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.125 07:17:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:24.125 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.125 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:24.125 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.125 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.125 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.125 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.383 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDNkMWE4YjM3NjA3ZDM2NmI5ZWU2YTMxYTdmZmIzYjg2NDEzMTMyNjAyZTllN2E4NGY3NDk0ZjRiNTVjOTgzZmUL5vw=: 00:15:24.383 07:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NDNkMWE4YjM3NjA3ZDM2NmI5ZWU2YTMxYTdmZmIzYjg2NDEzMTMyNjAyZTllN2E4NGY3NDk0ZjRiNTVjOTgzZmUL5vw=: 00:15:25.324 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.324 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.324 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:25.324 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.324 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.324 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.324 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:25.324 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:25.324 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:25.324 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:25.581 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:15:25.581 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:25.581 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:25.581 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:25.581 07:17:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:25.581 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.581 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.581 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.581 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.581 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.581 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.582 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.582 07:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.177 00:15:26.177 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:26.177 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.177 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.434 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.434 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.434 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.434 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.434 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.434 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:26.434 { 00:15:26.434 "cntlid": 81, 00:15:26.434 "qid": 0, 00:15:26.434 "state": "enabled", 00:15:26.434 "thread": "nvmf_tgt_poll_group_000", 00:15:26.434 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:26.434 "listen_address": { 00:15:26.434 "trtype": "TCP", 00:15:26.434 "adrfam": "IPv4", 00:15:26.434 "traddr": "10.0.0.2", 00:15:26.434 "trsvcid": "4420" 00:15:26.434 }, 00:15:26.434 "peer_address": { 00:15:26.434 "trtype": "TCP", 00:15:26.434 "adrfam": "IPv4", 00:15:26.434 "traddr": "10.0.0.1", 00:15:26.434 "trsvcid": "50218" 00:15:26.434 }, 00:15:26.434 "auth": { 00:15:26.434 "state": "completed", 00:15:26.434 "digest": 
"sha384", 00:15:26.434 "dhgroup": "ffdhe6144" 00:15:26.434 } 00:15:26.434 } 00:15:26.434 ]' 00:15:26.434 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:26.434 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:26.434 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:26.434 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:26.434 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:26.434 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.434 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.434 07:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.692 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2U2NDdmMDFmNjQzZjc4ZTAyNmU0OTM1Y2Q1YTIyN2NiNTBiODk3YzU0NTRiODI5urJx4w==: --dhchap-ctrl-secret DHHC-1:03:ZDNiN2M4NzM3YjBjYzNhM2M3ZGZmZjQ1ZTBjZjJkMmE2MDM1ZDM4MWY5ZmUyOTMwZDcxNmQwMGM2ZTY3MjUxMKdz6b0=: 00:15:26.692 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:Y2U2NDdmMDFmNjQzZjc4ZTAyNmU0OTM1Y2Q1YTIyN2NiNTBiODk3YzU0NTRiODI5urJx4w==: --dhchap-ctrl-secret DHHC-1:03:ZDNiN2M4NzM3YjBjYzNhM2M3ZGZmZjQ1ZTBjZjJkMmE2MDM1ZDM4MWY5ZmUyOTMwZDcxNmQwMGM2ZTY3MjUxMKdz6b0=: 00:15:27.624 07:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.624 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:27.624 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.624 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.624 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.624 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:27.624 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:27.624 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:27.883 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:15:27.883 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:27.883 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:27.883 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:27.883 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:27.883 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.883 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.883 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.883 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.883 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.883 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.883 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.883 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.450 00:15:28.450 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:28.450 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:28.450 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.708 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.708 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.708 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.708 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.708 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.708 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:28.708 { 00:15:28.708 "cntlid": 83, 00:15:28.708 "qid": 0, 00:15:28.708 "state": "enabled", 00:15:28.708 "thread": "nvmf_tgt_poll_group_000", 00:15:28.708 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:28.708 "listen_address": { 00:15:28.708 "trtype": "TCP", 00:15:28.708 "adrfam": "IPv4", 00:15:28.708 "traddr": "10.0.0.2", 00:15:28.708 
"trsvcid": "4420" 00:15:28.708 }, 00:15:28.708 "peer_address": { 00:15:28.708 "trtype": "TCP", 00:15:28.708 "adrfam": "IPv4", 00:15:28.708 "traddr": "10.0.0.1", 00:15:28.708 "trsvcid": "50232" 00:15:28.708 }, 00:15:28.708 "auth": { 00:15:28.708 "state": "completed", 00:15:28.708 "digest": "sha384", 00:15:28.708 "dhgroup": "ffdhe6144" 00:15:28.708 } 00:15:28.708 } 00:15:28.708 ]' 00:15:28.708 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:28.966 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:28.966 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:28.966 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:28.966 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:28.966 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.966 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.966 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.225 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTQ4Mjk0MTE2Y2RjMGU1NmFiMmYxOTM2YTBlNjQ4ODAvzKPS: --dhchap-ctrl-secret DHHC-1:02:NGM1MDMxYmRjYzI3NzZhNGUzOWNlOWQ1YTk4ZWVkYTI0N2QyNmQ1Y2VlMDkxM2NjJw2DYg==: 00:15:29.225 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MTQ4Mjk0MTE2Y2RjMGU1NmFiMmYxOTM2YTBlNjQ4ODAvzKPS: --dhchap-ctrl-secret DHHC-1:02:NGM1MDMxYmRjYzI3NzZhNGUzOWNlOWQ1YTk4ZWVkYTI0N2QyNmQ1Y2VlMDkxM2NjJw2DYg==: 00:15:30.158 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.158 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:30.158 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.158 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.158 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.158 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:30.158 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:30.158 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:30.415 
07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:15:30.415 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:30.415 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:30.415 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:30.415 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:30.415 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.415 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.415 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.415 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.415 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.415 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.415 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.415 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.981 00:15:30.981 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:30.981 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:30.981 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.239 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.239 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.239 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.239 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.239 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.239 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:31.239 { 00:15:31.239 "cntlid": 85, 00:15:31.239 "qid": 0, 00:15:31.239 "state": "enabled", 00:15:31.239 "thread": "nvmf_tgt_poll_group_000", 00:15:31.239 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:31.239 "listen_address": { 00:15:31.239 "trtype": "TCP", 00:15:31.239 "adrfam": "IPv4", 00:15:31.239 "traddr": "10.0.0.2", 00:15:31.239 "trsvcid": "4420" 00:15:31.239 }, 00:15:31.239 "peer_address": { 00:15:31.239 "trtype": "TCP", 00:15:31.239 "adrfam": "IPv4", 00:15:31.239 "traddr": "10.0.0.1", 00:15:31.239 "trsvcid": "50248" 00:15:31.239 }, 00:15:31.239 "auth": { 00:15:31.239 "state": "completed", 00:15:31.239 "digest": "sha384", 00:15:31.239 "dhgroup": "ffdhe6144" 00:15:31.239 } 00:15:31.239 } 00:15:31.239 ]' 00:15:31.239 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:31.497 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:31.497 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:31.497 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:31.497 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:31.497 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.497 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.497 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.755 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: --dhchap-ctrl-secret DHHC-1:01:YWQ5ZDlkNzVlZjU5YmU3YTIxYjE1ZTUxODQwMTExY2SuIum2: 00:15:31.755 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: --dhchap-ctrl-secret DHHC-1:01:YWQ5ZDlkNzVlZjU5YmU3YTIxYjE1ZTUxODQwMTExY2SuIum2: 00:15:32.688 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.688 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:32.688 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.688 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.688 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.688 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:32.688 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:32.688 07:17:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:32.947 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:15:32.947 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:32.947 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:32.947 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:32.947 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:32.947 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.947 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:32.947 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.947 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.947 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.947 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:32.947 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:32.947 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:33.514 00:15:33.514 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:33.514 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:33.514 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.772 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.772 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.772 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.772 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.772 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.772 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:33.772 { 00:15:33.772 "cntlid": 87, 
00:15:33.772 "qid": 0, 00:15:33.772 "state": "enabled", 00:15:33.772 "thread": "nvmf_tgt_poll_group_000", 00:15:33.772 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:33.772 "listen_address": { 00:15:33.772 "trtype": "TCP", 00:15:33.772 "adrfam": "IPv4", 00:15:33.772 "traddr": "10.0.0.2", 00:15:33.772 "trsvcid": "4420" 00:15:33.772 }, 00:15:33.772 "peer_address": { 00:15:33.772 "trtype": "TCP", 00:15:33.772 "adrfam": "IPv4", 00:15:33.772 "traddr": "10.0.0.1", 00:15:33.772 "trsvcid": "51694" 00:15:33.772 }, 00:15:33.772 "auth": { 00:15:33.772 "state": "completed", 00:15:33.772 "digest": "sha384", 00:15:33.772 "dhgroup": "ffdhe6144" 00:15:33.772 } 00:15:33.773 } 00:15:33.773 ]' 00:15:33.773 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:33.773 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:33.773 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:33.773 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:33.773 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:34.031 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.031 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.031 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.289 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDNkMWE4YjM3NjA3ZDM2NmI5ZWU2YTMxYTdmZmIzYjg2NDEzMTMyNjAyZTllN2E4NGY3NDk0ZjRiNTVjOTgzZmUL5vw=: 00:15:34.289 07:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NDNkMWE4YjM3NjA3ZDM2NmI5ZWU2YTMxYTdmZmIzYjg2NDEzMTMyNjAyZTllN2E4NGY3NDk0ZjRiNTVjOTgzZmUL5vw=: 00:15:35.223 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.223 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:35.223 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.223 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.223 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.223 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:35.223 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:35.223 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:35.223 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:35.481 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:15:35.481 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:35.481 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:35.481 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:35.481 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:35.481 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.481 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.481 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.481 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.481 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.481 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.481 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.481 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.415 00:15:36.415 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:36.415 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:36.415 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.415 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.415 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.415 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.415 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.415 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.415 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:36.415 { 00:15:36.415 "cntlid": 89, 00:15:36.415 "qid": 0, 00:15:36.415 "state": "enabled", 00:15:36.415 "thread": "nvmf_tgt_poll_group_000", 00:15:36.415 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:36.415 "listen_address": { 00:15:36.416 "trtype": "TCP", 00:15:36.416 "adrfam": "IPv4", 00:15:36.416 "traddr": "10.0.0.2", 00:15:36.416 "trsvcid": "4420" 00:15:36.416 }, 00:15:36.416 "peer_address": { 00:15:36.416 "trtype": "TCP", 00:15:36.416 "adrfam": "IPv4", 00:15:36.416 "traddr": "10.0.0.1", 00:15:36.416 "trsvcid": "51718" 00:15:36.416 }, 00:15:36.416 "auth": { 00:15:36.416 "state": "completed", 00:15:36.416 "digest": "sha384", 00:15:36.416 "dhgroup": "ffdhe8192" 00:15:36.416 } 00:15:36.416 } 00:15:36.416 ]' 00:15:36.416 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:36.416 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:36.416 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:36.675 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:36.675 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:36.675 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.675 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.675 07:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.933 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2U2NDdmMDFmNjQzZjc4ZTAyNmU0OTM1Y2Q1YTIyN2NiNTBiODk3YzU0NTRiODI5urJx4w==: --dhchap-ctrl-secret DHHC-1:03:ZDNiN2M4NzM3YjBjYzNhM2M3ZGZmZjQ1ZTBjZjJkMmE2MDM1ZDM4MWY5ZmUyOTMwZDcxNmQwMGM2ZTY3MjUxMKdz6b0=: 00:15:36.933 07:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:Y2U2NDdmMDFmNjQzZjc4ZTAyNmU0OTM1Y2Q1YTIyN2NiNTBiODk3YzU0NTRiODI5urJx4w==: --dhchap-ctrl-secret DHHC-1:03:ZDNiN2M4NzM3YjBjYzNhM2M3ZGZmZjQ1ZTBjZjJkMmE2MDM1ZDM4MWY5ZmUyOTMwZDcxNmQwMGM2ZTY3MjUxMKdz6b0=: 00:15:37.866 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.866 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:37.866 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.866 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.866 07:17:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.866 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:37.866 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:37.866 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:38.123 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:15:38.124 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:38.124 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:38.124 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:38.124 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:38.124 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.124 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:38.124 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.124 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.124 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.124 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:38.124 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:38.124 07:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.057 00:15:39.057 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:39.057 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:39.057 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.315 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.315 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:15:39.315 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.315 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.315 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.315 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.315 { 00:15:39.315 "cntlid": 91, 00:15:39.315 "qid": 0, 00:15:39.315 "state": "enabled", 00:15:39.315 "thread": "nvmf_tgt_poll_group_000", 00:15:39.315 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:39.315 "listen_address": { 00:15:39.315 "trtype": "TCP", 00:15:39.315 "adrfam": "IPv4", 00:15:39.315 "traddr": "10.0.0.2", 00:15:39.315 "trsvcid": "4420" 00:15:39.315 }, 00:15:39.315 "peer_address": { 00:15:39.315 "trtype": "TCP", 00:15:39.315 "adrfam": "IPv4", 00:15:39.315 "traddr": "10.0.0.1", 00:15:39.315 "trsvcid": "51738" 00:15:39.315 }, 00:15:39.315 "auth": { 00:15:39.315 "state": "completed", 00:15:39.315 "digest": "sha384", 00:15:39.315 "dhgroup": "ffdhe8192" 00:15:39.315 } 00:15:39.315 } 00:15:39.315 ]' 00:15:39.315 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:39.315 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:39.315 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:39.315 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:39.315 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.573 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.573 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.573 07:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.832 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTQ4Mjk0MTE2Y2RjMGU1NmFiMmYxOTM2YTBlNjQ4ODAvzKPS: --dhchap-ctrl-secret DHHC-1:02:NGM1MDMxYmRjYzI3NzZhNGUzOWNlOWQ1YTk4ZWVkYTI0N2QyNmQ1Y2VlMDkxM2NjJw2DYg==: 00:15:39.832 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MTQ4Mjk0MTE2Y2RjMGU1NmFiMmYxOTM2YTBlNjQ4ODAvzKPS: --dhchap-ctrl-secret DHHC-1:02:NGM1MDMxYmRjYzI3NzZhNGUzOWNlOWQ1YTk4ZWVkYTI0N2QyNmQ1Y2VlMDkxM2NjJw2DYg==: 00:15:40.765 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.765 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:40.765 07:17:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.765 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.765 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.765 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:40.765 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:40.765 07:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:41.054 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:15:41.054 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:41.054 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:41.054 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:41.054 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:41.054 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.054 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.054 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.054 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.054 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.054 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.054 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.054 07:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.619 00:15:41.619 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:41.619 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:41.619 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.886 07:17:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.886 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.886 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.886 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.152 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.152 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.152 { 00:15:42.152 "cntlid": 93, 00:15:42.152 "qid": 0, 00:15:42.152 "state": "enabled", 00:15:42.152 "thread": "nvmf_tgt_poll_group_000", 00:15:42.152 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:42.152 "listen_address": { 00:15:42.152 "trtype": "TCP", 00:15:42.152 "adrfam": "IPv4", 00:15:42.152 "traddr": "10.0.0.2", 00:15:42.152 "trsvcid": "4420" 00:15:42.152 }, 00:15:42.152 "peer_address": { 00:15:42.152 "trtype": "TCP", 00:15:42.152 "adrfam": "IPv4", 00:15:42.152 "traddr": "10.0.0.1", 00:15:42.152 "trsvcid": "51768" 00:15:42.152 }, 00:15:42.152 "auth": { 00:15:42.152 "state": "completed", 00:15:42.152 "digest": "sha384", 00:15:42.152 "dhgroup": "ffdhe8192" 00:15:42.152 } 00:15:42.152 } 00:15:42.152 ]' 00:15:42.152 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.152 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:42.152 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.152 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:42.152 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.152 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.152 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.152 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.409 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: --dhchap-ctrl-secret DHHC-1:01:YWQ5ZDlkNzVlZjU5YmU3YTIxYjE1ZTUxODQwMTExY2SuIum2: 00:15:42.410 07:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: --dhchap-ctrl-secret DHHC-1:01:YWQ5ZDlkNzVlZjU5YmU3YTIxYjE1ZTUxODQwMTExY2SuIum2: 00:15:43.431 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.431 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.431 07:17:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:43.431 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.431 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.431 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.431 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.431 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:43.431 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:43.689 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:15:43.689 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.689 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:43.689 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:43.689 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:43.689 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.689 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:43.689 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.689 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.689 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.689 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:43.689 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:43.689 07:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:44.621 00:15:44.621 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.621 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.621 07:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.879 07:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.879 07:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.879 07:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.879 07:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.879 07:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.879 07:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.879 { 00:15:44.879 "cntlid": 95, 00:15:44.879 "qid": 0, 00:15:44.879 "state": "enabled", 00:15:44.879 "thread": "nvmf_tgt_poll_group_000", 00:15:44.879 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:44.879 "listen_address": { 00:15:44.879 "trtype": "TCP", 00:15:44.879 "adrfam": "IPv4", 00:15:44.879 "traddr": "10.0.0.2", 00:15:44.879 "trsvcid": "4420" 00:15:44.879 }, 00:15:44.879 "peer_address": { 00:15:44.879 "trtype": "TCP", 00:15:44.879 "adrfam": "IPv4", 00:15:44.879 "traddr": "10.0.0.1", 00:15:44.879 "trsvcid": "50394" 00:15:44.879 }, 00:15:44.879 "auth": { 00:15:44.879 "state": "completed", 00:15:44.879 "digest": "sha384", 00:15:44.879 "dhgroup": "ffdhe8192" 00:15:44.879 } 00:15:44.879 } 00:15:44.879 ]' 00:15:44.879 07:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.879 07:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:44.879 07:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.879 07:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:44.879 07:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.879 07:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.879 07:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.879 07:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.140 07:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDNkMWE4YjM3NjA3ZDM2NmI5ZWU2YTMxYTdmZmIzYjg2NDEzMTMyNjAyZTllN2E4NGY3NDk0ZjRiNTVjOTgzZmUL5vw=: 00:15:45.140 07:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NDNkMWE4YjM3NjA3ZDM2NmI5ZWU2YTMxYTdmZmIzYjg2NDEzMTMyNjAyZTllN2E4NGY3NDk0ZjRiNTVjOTgzZmUL5vw=: 00:15:46.074 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.074 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.074 07:17:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:46.074 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.074 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.074 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.074 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:46.074 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:46.074 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.074 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:46.074 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:46.331 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:15:46.331 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.331 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:46.331 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:46.331 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:46.331 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.331 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.331 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.331 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.331 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.331 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.331 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.331 07:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.897 00:15:46.897 
07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:46.897 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:46.897 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.155 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.155 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.155 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.155 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.155 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.155 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.155 { 00:15:47.155 "cntlid": 97, 00:15:47.155 "qid": 0, 00:15:47.155 "state": "enabled", 00:15:47.155 "thread": "nvmf_tgt_poll_group_000", 00:15:47.155 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:47.155 "listen_address": { 00:15:47.155 "trtype": "TCP", 00:15:47.155 "adrfam": "IPv4", 00:15:47.155 "traddr": "10.0.0.2", 00:15:47.155 "trsvcid": "4420" 00:15:47.155 }, 00:15:47.155 "peer_address": { 00:15:47.155 "trtype": "TCP", 00:15:47.155 "adrfam": "IPv4", 00:15:47.155 "traddr": "10.0.0.1", 00:15:47.155 "trsvcid": "50422" 00:15:47.155 }, 00:15:47.155 "auth": { 00:15:47.155 "state": "completed", 00:15:47.155 "digest": "sha512", 00:15:47.155 "dhgroup": "null" 00:15:47.155 } 00:15:47.155 } 00:15:47.155 ]' 00:15:47.155 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.155 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:47.155 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.155 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:47.155 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.155 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.155 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.155 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.413 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2U2NDdmMDFmNjQzZjc4ZTAyNmU0OTM1Y2Q1YTIyN2NiNTBiODk3YzU0NTRiODI5urJx4w==: --dhchap-ctrl-secret DHHC-1:03:ZDNiN2M4NzM3YjBjYzNhM2M3ZGZmZjQ1ZTBjZjJkMmE2MDM1ZDM4MWY5ZmUyOTMwZDcxNmQwMGM2ZTY3MjUxMKdz6b0=: 00:15:47.413 07:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:Y2U2NDdmMDFmNjQzZjc4ZTAyNmU0OTM1Y2Q1YTIyN2NiNTBiODk3YzU0NTRiODI5urJx4w==: --dhchap-ctrl-secret DHHC-1:03:ZDNiN2M4NzM3YjBjYzNhM2M3ZGZmZjQ1ZTBjZjJkMmE2MDM1ZDM4MWY5ZmUyOTMwZDcxNmQwMGM2ZTY3MjUxMKdz6b0=: 00:15:48.346 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.346 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:48.346 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.346 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.346 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.346 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.346 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:48.346 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:48.604 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:15:48.604 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.604 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:48.604 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:48.605 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:48.605 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.605 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.605 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.605 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.605 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.605 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.605 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.605 07:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.863 00:15:48.864 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:48.864 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:48.864 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.121 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.121 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.121 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.121 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.121 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.121 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.121 { 00:15:49.121 "cntlid": 99, 00:15:49.121 "qid": 0, 00:15:49.121 "state": "enabled", 00:15:49.121 "thread": "nvmf_tgt_poll_group_000", 00:15:49.121 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:49.121 "listen_address": { 00:15:49.121 "trtype": "TCP", 00:15:49.121 "adrfam": "IPv4", 00:15:49.121 "traddr": "10.0.0.2", 00:15:49.121 "trsvcid": "4420" 00:15:49.121 }, 00:15:49.121 "peer_address": { 00:15:49.121 "trtype": "TCP", 00:15:49.121 "adrfam": "IPv4", 00:15:49.121 "traddr": "10.0.0.1", 00:15:49.121 "trsvcid": "50440" 00:15:49.121 }, 00:15:49.121 "auth": { 00:15:49.121 "state": "completed", 00:15:49.121 "digest": "sha512", 00:15:49.122 "dhgroup": "null" 00:15:49.122 } 00:15:49.122 } 00:15:49.122 ]' 00:15:49.122 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.381 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:49.381 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.381 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:49.381 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.381 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.381 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.381 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.669 07:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTQ4Mjk0MTE2Y2RjMGU1NmFiMmYxOTM2YTBlNjQ4ODAvzKPS: --dhchap-ctrl-secret DHHC-1:02:NGM1MDMxYmRjYzI3NzZhNGUzOWNlOWQ1YTk4ZWVkYTI0N2QyNmQ1Y2VlMDkxM2NjJw2DYg==: 00:15:49.670 07:17:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MTQ4Mjk0MTE2Y2RjMGU1NmFiMmYxOTM2YTBlNjQ4ODAvzKPS: --dhchap-ctrl-secret DHHC-1:02:NGM1MDMxYmRjYzI3NzZhNGUzOWNlOWQ1YTk4ZWVkYTI0N2QyNmQ1Y2VlMDkxM2NjJw2DYg==: 00:15:50.625 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.625 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:50.625 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.625 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.625 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.625 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.625 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:50.625 07:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:50.883 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:15:50.883 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.883 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:50.883 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:50.883 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:50.883 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.883 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.883 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.883 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.883 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.883 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.883 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:15:50.883 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:51.140 00:15:51.141 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:51.141 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.141 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.398 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.398 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.398 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.398 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.398 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.398 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.398 { 00:15:51.398 "cntlid": 101, 00:15:51.398 "qid": 0, 00:15:51.398 "state": "enabled", 00:15:51.398 "thread": "nvmf_tgt_poll_group_000", 00:15:51.398 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:51.398 "listen_address": { 00:15:51.398 "trtype": "TCP", 00:15:51.398 "adrfam": "IPv4", 00:15:51.398 "traddr": "10.0.0.2", 00:15:51.398 "trsvcid": "4420" 00:15:51.398 }, 00:15:51.398 "peer_address": { 00:15:51.398 "trtype": "TCP", 00:15:51.398 "adrfam": "IPv4", 00:15:51.398 "traddr": "10.0.0.1", 00:15:51.398 "trsvcid": "50452" 00:15:51.398 }, 00:15:51.398 "auth": { 00:15:51.398 "state": "completed", 00:15:51.398 "digest": "sha512", 00:15:51.398 "dhgroup": "null" 00:15:51.398 } 00:15:51.398 } 00:15:51.398 ]' 00:15:51.398 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.656 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:51.656 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.656 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:51.656 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:51.656 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.656 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.656 07:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.914 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: --dhchap-ctrl-secret DHHC-1:01:YWQ5ZDlkNzVlZjU5YmU3YTIxYjE1ZTUxODQwMTExY2SuIum2: 00:15:51.914 07:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: --dhchap-ctrl-secret DHHC-1:01:YWQ5ZDlkNzVlZjU5YmU3YTIxYjE1ZTUxODQwMTExY2SuIum2: 00:15:52.847 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.847 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:52.847 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.847 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.847 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.847 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:52.847 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:52.847 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:53.105 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:15:53.105 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.105 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:53.105 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:53.105 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:53.105 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.105 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:53.105 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.105 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.105 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.105 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:53.105 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:53.105 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:53.364 00:15:53.364 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.364 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.364 07:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.621 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.622 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.622 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.622 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.622 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.622 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.622 { 00:15:53.622 "cntlid": 103, 00:15:53.622 "qid": 0, 00:15:53.622 "state": "enabled", 00:15:53.622 "thread": "nvmf_tgt_poll_group_000", 00:15:53.622 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:53.622 "listen_address": { 00:15:53.622 "trtype": "TCP", 00:15:53.622 "adrfam": "IPv4", 00:15:53.622 "traddr": "10.0.0.2", 00:15:53.622 "trsvcid": "4420" 00:15:53.622 }, 00:15:53.622 "peer_address": { 00:15:53.622 "trtype": "TCP", 00:15:53.622 "adrfam": "IPv4", 00:15:53.622 "traddr": "10.0.0.1", 00:15:53.622 "trsvcid": "40038" 00:15:53.622 }, 00:15:53.622 "auth": { 00:15:53.622 "state": "completed", 00:15:53.622 "digest": "sha512", 00:15:53.622 "dhgroup": "null" 00:15:53.622 } 00:15:53.622 } 00:15:53.622 ]' 00:15:53.622 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.879 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:53.879 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.879 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:53.879 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.879 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.879 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.879 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.137 07:17:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDNkMWE4YjM3NjA3ZDM2NmI5ZWU2YTMxYTdmZmIzYjg2NDEzMTMyNjAyZTllN2E4NGY3NDk0ZjRiNTVjOTgzZmUL5vw=: 00:15:54.137 07:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NDNkMWE4YjM3NjA3ZDM2NmI5ZWU2YTMxYTdmZmIzYjg2NDEzMTMyNjAyZTllN2E4NGY3NDk0ZjRiNTVjOTgzZmUL5vw=: 00:15:55.071 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.071 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:55.071 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.071 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.071 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.071 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:55.071 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:55.071 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:55.071 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:55.330 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:15:55.330 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.330 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:55.330 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:55.330 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:55.330 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.330 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.330 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.330 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.330 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.330 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:15:55.330 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.330 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.588 00:15:55.588 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.588 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.588 07:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.846 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.846 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.846 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.846 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.846 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.846 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.846 { 00:15:55.846 "cntlid": 105, 00:15:55.846 "qid": 0, 00:15:55.846 "state": "enabled", 00:15:55.846 "thread": "nvmf_tgt_poll_group_000", 00:15:55.846 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:55.846 "listen_address": { 00:15:55.846 "trtype": "TCP", 00:15:55.846 "adrfam": "IPv4", 00:15:55.846 "traddr": "10.0.0.2", 00:15:55.846 "trsvcid": "4420" 00:15:55.846 }, 00:15:55.846 "peer_address": { 00:15:55.846 "trtype": "TCP", 00:15:55.846 "adrfam": "IPv4", 00:15:55.846 "traddr": "10.0.0.1", 00:15:55.846 "trsvcid": "40070" 00:15:55.846 }, 00:15:55.846 "auth": { 00:15:55.846 "state": "completed", 00:15:55.846 "digest": "sha512", 00:15:55.846 "dhgroup": "ffdhe2048" 00:15:55.846 } 00:15:55.846 } 00:15:55.846 ]' 00:15:55.846 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:56.104 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:56.104 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:56.104 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:56.104 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.104 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.104 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.104 07:17:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.362 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2U2NDdmMDFmNjQzZjc4ZTAyNmU0OTM1Y2Q1YTIyN2NiNTBiODk3YzU0NTRiODI5urJx4w==: --dhchap-ctrl-secret DHHC-1:03:ZDNiN2M4NzM3YjBjYzNhM2M3ZGZmZjQ1ZTBjZjJkMmE2MDM1ZDM4MWY5ZmUyOTMwZDcxNmQwMGM2ZTY3MjUxMKdz6b0=: 00:15:56.362 07:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:Y2U2NDdmMDFmNjQzZjc4ZTAyNmU0OTM1Y2Q1YTIyN2NiNTBiODk3YzU0NTRiODI5urJx4w==: --dhchap-ctrl-secret DHHC-1:03:ZDNiN2M4NzM3YjBjYzNhM2M3ZGZmZjQ1ZTBjZjJkMmE2MDM1ZDM4MWY5ZmUyOTMwZDcxNmQwMGM2ZTY3MjUxMKdz6b0=: 00:15:57.295 07:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.295 07:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:57.295 07:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.295 07:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.296 07:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.296 07:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.296 07:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:57.296 07:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:57.554 07:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:15:57.554 07:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:57.554 07:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:57.554 07:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:57.554 07:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:57.554 07:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.554 07:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.554 07:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.554 07:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:57.554 07:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.554 07:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.554 07:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.554 07:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.118 00:15:58.118 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:58.118 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.118 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.118 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.118 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.118 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.118 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.375 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.375 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.375 { 00:15:58.375 "cntlid": 107, 00:15:58.375 "qid": 0, 00:15:58.375 "state": "enabled", 00:15:58.376 "thread": "nvmf_tgt_poll_group_000", 00:15:58.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:58.376 "listen_address": { 00:15:58.376 "trtype": "TCP", 00:15:58.376 "adrfam": "IPv4", 00:15:58.376 "traddr": "10.0.0.2", 00:15:58.376 "trsvcid": "4420" 00:15:58.376 }, 00:15:58.376 "peer_address": { 00:15:58.376 "trtype": "TCP", 00:15:58.376 "adrfam": "IPv4", 00:15:58.376 "traddr": "10.0.0.1", 00:15:58.376 "trsvcid": "40096" 00:15:58.376 }, 00:15:58.376 "auth": { 00:15:58.376 "state": "completed", 00:15:58.376 "digest": "sha512", 00:15:58.376 "dhgroup": "ffdhe2048" 00:15:58.376 } 00:15:58.376 } 00:15:58.376 ]' 00:15:58.376 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.376 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:58.376 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.376 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:58.376 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:15:58.376 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.376 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.376 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.634 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTQ4Mjk0MTE2Y2RjMGU1NmFiMmYxOTM2YTBlNjQ4ODAvzKPS: --dhchap-ctrl-secret DHHC-1:02:NGM1MDMxYmRjYzI3NzZhNGUzOWNlOWQ1YTk4ZWVkYTI0N2QyNmQ1Y2VlMDkxM2NjJw2DYg==: 00:15:58.634 07:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MTQ4Mjk0MTE2Y2RjMGU1NmFiMmYxOTM2YTBlNjQ4ODAvzKPS: --dhchap-ctrl-secret DHHC-1:02:NGM1MDMxYmRjYzI3NzZhNGUzOWNlOWQ1YTk4ZWVkYTI0N2QyNmQ1Y2VlMDkxM2NjJw2DYg==: 00:15:59.567 07:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.567 07:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:59.567 07:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.567 07:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.567 07:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.567 07:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:59.567 07:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:59.567 07:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:59.825 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:15:59.825 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:59.825 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:59.825 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:59.825 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:59.825 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.825 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 
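After the RPC-based attach/detach, each cycle also exercises the same credentials through the kernel initiator, where nvme-cli is given the literal DHHC-1 secret strings instead of the key names (key2/ckey2) used on the RPC side. A condensed form of the sequence visible in this trace follows; the secrets are shortened here for readability, and the full DHHC-1 strings appear verbatim in the log above.

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
    --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 \
    --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  # target side: drop the host entry again before the next digest/dhgroup/key combination
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a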
00:15:59.825 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.825 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.825 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.825 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.825 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.825 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.084 00:16:00.084 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.084 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.084 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.342 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.342 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.342 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.342 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.600 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.600 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.600 { 00:16:00.600 "cntlid": 109, 00:16:00.600 "qid": 0, 00:16:00.600 "state": "enabled", 00:16:00.600 "thread": "nvmf_tgt_poll_group_000", 00:16:00.601 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:00.601 "listen_address": { 00:16:00.601 "trtype": "TCP", 00:16:00.601 "adrfam": "IPv4", 00:16:00.601 "traddr": "10.0.0.2", 00:16:00.601 "trsvcid": "4420" 00:16:00.601 }, 00:16:00.601 "peer_address": { 00:16:00.601 "trtype": "TCP", 00:16:00.601 "adrfam": "IPv4", 00:16:00.601 "traddr": "10.0.0.1", 00:16:00.601 "trsvcid": "40116" 00:16:00.601 }, 00:16:00.601 "auth": { 00:16:00.601 "state": "completed", 00:16:00.601 "digest": "sha512", 00:16:00.601 "dhgroup": "ffdhe2048" 00:16:00.601 } 00:16:00.601 } 00:16:00.601 ]' 00:16:00.601 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.601 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:00.601 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.601 07:18:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:00.601 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:00.601 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.601 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.601 07:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.859 07:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: --dhchap-ctrl-secret DHHC-1:01:YWQ5ZDlkNzVlZjU5YmU3YTIxYjE1ZTUxODQwMTExY2SuIum2: 00:16:00.859 07:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: --dhchap-ctrl-secret DHHC-1:01:YWQ5ZDlkNzVlZjU5YmU3YTIxYjE1ZTUxODQwMTExY2SuIum2: 00:16:01.792 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.792 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:01.792 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.792 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.792 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.792 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:01.792 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:01.792 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:02.050 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:02.050 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.050 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:02.050 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:02.050 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:02.050 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.050 07:18:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:02.050 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.050 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.050 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.050 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:02.050 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:02.050 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:02.308 00:16:02.565 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:02.566 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:02.566 07:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.823 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.823 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.823 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.823 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.823 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.823 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:02.823 { 00:16:02.823 "cntlid": 111, 00:16:02.823 "qid": 0, 00:16:02.823 "state": "enabled", 00:16:02.823 "thread": "nvmf_tgt_poll_group_000", 00:16:02.823 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:02.823 "listen_address": { 00:16:02.823 "trtype": "TCP", 00:16:02.823 "adrfam": "IPv4", 00:16:02.823 "traddr": "10.0.0.2", 00:16:02.823 "trsvcid": "4420" 00:16:02.823 }, 00:16:02.823 "peer_address": { 00:16:02.823 "trtype": "TCP", 00:16:02.823 "adrfam": "IPv4", 00:16:02.823 "traddr": "10.0.0.1", 00:16:02.823 "trsvcid": "40138" 00:16:02.823 }, 00:16:02.823 "auth": { 00:16:02.823 "state": "completed", 00:16:02.823 "digest": "sha512", 00:16:02.823 "dhgroup": "ffdhe2048" 00:16:02.823 } 00:16:02.823 } 00:16:02.823 ]' 00:16:02.823 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:02.823 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:02.823 
07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:02.823 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:02.823 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:02.823 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.823 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.823 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.081 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDNkMWE4YjM3NjA3ZDM2NmI5ZWU2YTMxYTdmZmIzYjg2NDEzMTMyNjAyZTllN2E4NGY3NDk0ZjRiNTVjOTgzZmUL5vw=: 00:16:03.081 07:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NDNkMWE4YjM3NjA3ZDM2NmI5ZWU2YTMxYTdmZmIzYjg2NDEzMTMyNjAyZTllN2E4NGY3NDk0ZjRiNTVjOTgzZmUL5vw=: 00:16:04.015 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.015 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:04.015 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.015 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.015 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.015 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:04.015 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.015 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:04.015 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:04.273 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:04.273 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.273 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:04.273 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:04.273 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:04.273 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.273 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.273 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.273 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.273 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.273 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.273 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.273 07:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.838 00:16:04.838 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:04.838 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.838 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.096 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.096 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.096 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.096 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.096 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.096 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.096 { 00:16:05.096 "cntlid": 113, 00:16:05.096 "qid": 0, 00:16:05.096 "state": "enabled", 00:16:05.096 "thread": "nvmf_tgt_poll_group_000", 00:16:05.096 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:05.096 "listen_address": { 00:16:05.096 "trtype": "TCP", 00:16:05.096 "adrfam": "IPv4", 00:16:05.096 "traddr": "10.0.0.2", 00:16:05.096 "trsvcid": "4420" 00:16:05.096 }, 00:16:05.096 "peer_address": { 00:16:05.096 "trtype": "TCP", 00:16:05.097 "adrfam": "IPv4", 00:16:05.097 "traddr": "10.0.0.1", 00:16:05.097 "trsvcid": "36520" 00:16:05.097 }, 00:16:05.097 "auth": { 00:16:05.097 "state": "completed", 00:16:05.097 "digest": "sha512", 00:16:05.097 "dhgroup": "ffdhe3072" 00:16:05.097 } 00:16:05.097 } 00:16:05.097 ]' 00:16:05.097 07:18:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.097 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:05.097 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.097 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:05.097 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.097 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.097 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.097 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.354 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2U2NDdmMDFmNjQzZjc4ZTAyNmU0OTM1Y2Q1YTIyN2NiNTBiODk3YzU0NTRiODI5urJx4w==: --dhchap-ctrl-secret DHHC-1:03:ZDNiN2M4NzM3YjBjYzNhM2M3ZGZmZjQ1ZTBjZjJkMmE2MDM1ZDM4MWY5ZmUyOTMwZDcxNmQwMGM2ZTY3MjUxMKdz6b0=: 00:16:05.355 07:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:Y2U2NDdmMDFmNjQzZjc4ZTAyNmU0OTM1Y2Q1YTIyN2NiNTBiODk3YzU0NTRiODI5urJx4w==: --dhchap-ctrl-secret DHHC-1:03:ZDNiN2M4NzM3YjBjYzNhM2M3ZGZmZjQ1ZTBjZjJkMmE2MDM1ZDM4MWY5ZmUyOTMwZDcxNmQwMGM2ZTY3MjUxMKdz6b0=: 00:16:06.296 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.296 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:06.296 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.296 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.296 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.296 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.296 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:06.296 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:06.555 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:06.555 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.555 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:16:06.555 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:06.555 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:06.555 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.555 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.555 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.555 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.555 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.555 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.555 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.555 07:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.121 00:16:07.121 07:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.121 07:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.121 07:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.381 07:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.381 07:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.381 07:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.381 07:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.381 07:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.381 07:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.381 { 00:16:07.381 "cntlid": 115, 00:16:07.381 "qid": 0, 00:16:07.381 "state": "enabled", 00:16:07.381 "thread": "nvmf_tgt_poll_group_000", 00:16:07.381 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:07.381 "listen_address": { 00:16:07.381 "trtype": "TCP", 00:16:07.381 "adrfam": "IPv4", 00:16:07.381 "traddr": "10.0.0.2", 00:16:07.381 "trsvcid": "4420" 00:16:07.381 }, 00:16:07.381 "peer_address": { 00:16:07.381 "trtype": "TCP", 00:16:07.381 "adrfam": "IPv4", 
00:16:07.381 "traddr": "10.0.0.1", 00:16:07.381 "trsvcid": "36528" 00:16:07.381 }, 00:16:07.381 "auth": { 00:16:07.381 "state": "completed", 00:16:07.381 "digest": "sha512", 00:16:07.381 "dhgroup": "ffdhe3072" 00:16:07.381 } 00:16:07.381 } 00:16:07.381 ]' 00:16:07.381 07:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.381 07:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:07.381 07:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.381 07:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:07.381 07:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.381 07:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.381 07:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.381 07:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.640 07:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTQ4Mjk0MTE2Y2RjMGU1NmFiMmYxOTM2YTBlNjQ4ODAvzKPS: --dhchap-ctrl-secret DHHC-1:02:NGM1MDMxYmRjYzI3NzZhNGUzOWNlOWQ1YTk4ZWVkYTI0N2QyNmQ1Y2VlMDkxM2NjJw2DYg==: 00:16:07.640 07:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MTQ4Mjk0MTE2Y2RjMGU1NmFiMmYxOTM2YTBlNjQ4ODAvzKPS: --dhchap-ctrl-secret DHHC-1:02:NGM1MDMxYmRjYzI3NzZhNGUzOWNlOWQ1YTk4ZWVkYTI0N2QyNmQ1Y2VlMDkxM2NjJw2DYg==: 00:16:08.573 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.573 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:08.573 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.573 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.573 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.573 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.573 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:08.573 07:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:08.831 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:16:08.831 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:08.831 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:08.831 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:08.831 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:08.831 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.831 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.831 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.831 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.831 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.831 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.831 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.831 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.396 00:16:09.396 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.396 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.396 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.656 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.656 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.656 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.656 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.656 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.656 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.656 { 00:16:09.656 "cntlid": 117, 00:16:09.656 "qid": 0, 00:16:09.656 "state": "enabled", 00:16:09.656 "thread": "nvmf_tgt_poll_group_000", 00:16:09.656 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:09.656 "listen_address": { 00:16:09.656 "trtype": "TCP", 
00:16:09.656 "adrfam": "IPv4", 00:16:09.656 "traddr": "10.0.0.2", 00:16:09.656 "trsvcid": "4420" 00:16:09.656 }, 00:16:09.656 "peer_address": { 00:16:09.656 "trtype": "TCP", 00:16:09.656 "adrfam": "IPv4", 00:16:09.656 "traddr": "10.0.0.1", 00:16:09.656 "trsvcid": "36560" 00:16:09.656 }, 00:16:09.656 "auth": { 00:16:09.656 "state": "completed", 00:16:09.656 "digest": "sha512", 00:16:09.656 "dhgroup": "ffdhe3072" 00:16:09.656 } 00:16:09.656 } 00:16:09.656 ]' 00:16:09.656 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.656 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:09.656 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.656 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:09.656 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.656 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.656 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.656 07:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.914 07:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: --dhchap-ctrl-secret DHHC-1:01:YWQ5ZDlkNzVlZjU5YmU3YTIxYjE1ZTUxODQwMTExY2SuIum2: 00:16:09.914 07:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: --dhchap-ctrl-secret DHHC-1:01:YWQ5ZDlkNzVlZjU5YmU3YTIxYjE1ZTUxODQwMTExY2SuIum2: 00:16:10.849 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.849 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:10.849 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.849 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.849 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.849 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:10.849 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:10.849 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:11.107 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:16:11.107 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.107 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:11.107 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:11.107 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:11.107 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.107 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:11.107 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.107 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.107 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.107 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:11.107 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:11.107 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:11.675 00:16:11.675 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.675 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.675 07:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.934 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.934 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.934 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.934 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.934 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.934 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:11.934 { 00:16:11.934 "cntlid": 119, 00:16:11.934 "qid": 0, 00:16:11.934 "state": "enabled", 00:16:11.934 "thread": "nvmf_tgt_poll_group_000", 00:16:11.934 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:11.934 "listen_address": { 00:16:11.934 "trtype": "TCP", 00:16:11.934 "adrfam": "IPv4", 00:16:11.934 "traddr": "10.0.0.2", 00:16:11.934 "trsvcid": "4420" 00:16:11.934 }, 00:16:11.934 "peer_address": { 00:16:11.934 "trtype": "TCP", 00:16:11.934 "adrfam": "IPv4", 00:16:11.934 "traddr": "10.0.0.1", 00:16:11.934 "trsvcid": "36600" 00:16:11.934 }, 00:16:11.934 "auth": { 00:16:11.934 "state": "completed", 00:16:11.934 "digest": "sha512", 00:16:11.934 "dhgroup": "ffdhe3072" 00:16:11.934 } 00:16:11.934 } 00:16:11.934 ]' 00:16:11.934 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:11.934 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:11.934 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:11.934 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:11.934 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:11.934 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.934 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.934 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.192 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDNkMWE4YjM3NjA3ZDM2NmI5ZWU2YTMxYTdmZmIzYjg2NDEzMTMyNjAyZTllN2E4NGY3NDk0ZjRiNTVjOTgzZmUL5vw=: 00:16:12.192 07:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NDNkMWE4YjM3NjA3ZDM2NmI5ZWU2YTMxYTdmZmIzYjg2NDEzMTMyNjAyZTllN2E4NGY3NDk0ZjRiNTVjOTgzZmUL5vw=: 00:16:13.125 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.125 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:13.125 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.126 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.126 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.126 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:13.126 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.126 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:13.126 07:18:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:13.382 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:16:13.382 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.382 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:13.382 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:13.382 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:13.382 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.382 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.382 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.382 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.382 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.382 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.382 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.382 07:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.950 00:16:13.950 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.950 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.950 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.208 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.208 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.208 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.208 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.208 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.208 07:18:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.208 { 00:16:14.208 "cntlid": 121, 00:16:14.208 "qid": 0, 00:16:14.208 "state": "enabled", 00:16:14.208 "thread": "nvmf_tgt_poll_group_000", 00:16:14.208 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:14.208 "listen_address": { 00:16:14.208 "trtype": "TCP", 00:16:14.208 "adrfam": "IPv4", 00:16:14.208 "traddr": "10.0.0.2", 00:16:14.208 "trsvcid": "4420" 00:16:14.208 }, 00:16:14.208 "peer_address": { 00:16:14.208 "trtype": "TCP", 00:16:14.208 "adrfam": "IPv4", 00:16:14.208 "traddr": "10.0.0.1", 00:16:14.208 "trsvcid": "57742" 00:16:14.208 }, 00:16:14.208 "auth": { 00:16:14.209 "state": "completed", 00:16:14.209 "digest": "sha512", 00:16:14.209 "dhgroup": "ffdhe4096" 00:16:14.209 } 00:16:14.209 } 00:16:14.209 ]' 00:16:14.209 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.209 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:14.209 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.209 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:14.209 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.209 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.209 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.209 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.774 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2U2NDdmMDFmNjQzZjc4ZTAyNmU0OTM1Y2Q1YTIyN2NiNTBiODk3YzU0NTRiODI5urJx4w==: --dhchap-ctrl-secret DHHC-1:03:ZDNiN2M4NzM3YjBjYzNhM2M3ZGZmZjQ1ZTBjZjJkMmE2MDM1ZDM4MWY5ZmUyOTMwZDcxNmQwMGM2ZTY3MjUxMKdz6b0=: 00:16:14.774 07:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:Y2U2NDdmMDFmNjQzZjc4ZTAyNmU0OTM1Y2Q1YTIyN2NiNTBiODk3YzU0NTRiODI5urJx4w==: --dhchap-ctrl-secret DHHC-1:03:ZDNiN2M4NzM3YjBjYzNhM2M3ZGZmZjQ1ZTBjZjJkMmE2MDM1ZDM4MWY5ZmUyOTMwZDcxNmQwMGM2ZTY3MjUxMKdz6b0=: 00:16:15.707 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.707 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:15.707 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.707 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.707 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:15.707 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.707 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:15.707 07:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:15.707 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:16:15.707 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.707 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:15.707 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:15.707 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:15.707 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.707 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.707 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.707 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.707 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.707 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.707 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.707 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.273 00:16:16.273 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.273 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:16.273 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.531 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.531 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.531 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.531 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.531 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.531 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:16.531 { 00:16:16.531 "cntlid": 123, 00:16:16.531 "qid": 0, 00:16:16.531 "state": "enabled", 00:16:16.531 "thread": "nvmf_tgt_poll_group_000", 00:16:16.531 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:16.531 "listen_address": { 00:16:16.531 "trtype": "TCP", 00:16:16.531 "adrfam": "IPv4", 00:16:16.531 "traddr": "10.0.0.2", 00:16:16.531 "trsvcid": "4420" 00:16:16.531 }, 00:16:16.531 "peer_address": { 00:16:16.531 "trtype": "TCP", 00:16:16.531 "adrfam": "IPv4", 00:16:16.531 "traddr": "10.0.0.1", 00:16:16.531 "trsvcid": "57766" 00:16:16.531 }, 00:16:16.531 "auth": { 00:16:16.531 "state": "completed", 00:16:16.531 "digest": "sha512", 00:16:16.531 "dhgroup": "ffdhe4096" 00:16:16.531 } 00:16:16.531 } 00:16:16.531 ]' 00:16:16.531 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.531 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:16.531 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:16.531 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:16.531 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.789 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.789 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.789 07:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.046 07:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTQ4Mjk0MTE2Y2RjMGU1NmFiMmYxOTM2YTBlNjQ4ODAvzKPS: --dhchap-ctrl-secret DHHC-1:02:NGM1MDMxYmRjYzI3NzZhNGUzOWNlOWQ1YTk4ZWVkYTI0N2QyNmQ1Y2VlMDkxM2NjJw2DYg==: 00:16:17.047 07:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MTQ4Mjk0MTE2Y2RjMGU1NmFiMmYxOTM2YTBlNjQ4ODAvzKPS: --dhchap-ctrl-secret DHHC-1:02:NGM1MDMxYmRjYzI3NzZhNGUzOWNlOWQ1YTk4ZWVkYTI0N2QyNmQ1Y2VlMDkxM2NjJw2DYg==: 00:16:17.980 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.980 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:17.980 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.980 07:18:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.980 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.980 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.980 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:17.980 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:18.238 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:16:18.238 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.238 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:18.238 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:18.238 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:18.239 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.239 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.239 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.239 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.239 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.239 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.239 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.239 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.804 00:16:18.804 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.804 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.804 07:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.064 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.064 07:18:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.064 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.064 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.064 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.064 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.064 { 00:16:19.064 "cntlid": 125, 00:16:19.064 "qid": 0, 00:16:19.064 "state": "enabled", 00:16:19.064 "thread": "nvmf_tgt_poll_group_000", 00:16:19.064 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:19.064 "listen_address": { 00:16:19.064 "trtype": "TCP", 00:16:19.064 "adrfam": "IPv4", 00:16:19.064 "traddr": "10.0.0.2", 00:16:19.064 "trsvcid": "4420" 00:16:19.064 }, 00:16:19.064 "peer_address": { 00:16:19.064 "trtype": "TCP", 00:16:19.064 "adrfam": "IPv4", 00:16:19.064 "traddr": "10.0.0.1", 00:16:19.064 "trsvcid": "57794" 00:16:19.064 }, 00:16:19.064 "auth": { 00:16:19.064 "state": "completed", 00:16:19.064 "digest": "sha512", 00:16:19.064 "dhgroup": "ffdhe4096" 00:16:19.064 } 00:16:19.064 } 00:16:19.064 ]' 00:16:19.064 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.064 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:19.064 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.064 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:19.064 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.064 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.064 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.064 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.363 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: --dhchap-ctrl-secret DHHC-1:01:YWQ5ZDlkNzVlZjU5YmU3YTIxYjE1ZTUxODQwMTExY2SuIum2: 00:16:19.363 07:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: --dhchap-ctrl-secret DHHC-1:01:YWQ5ZDlkNzVlZjU5YmU3YTIxYjE1ZTUxODQwMTExY2SuIum2: 00:16:20.323 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.323 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:20.323 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.323 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.323 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.323 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.323 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:20.323 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:20.581 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:16:20.581 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.581 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:20.581 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:20.581 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:20.581 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.581 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:20.581 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.581 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.581 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.581 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:20.581 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:20.581 07:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:20.839 00:16:20.839 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.839 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.839 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.097 07:18:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.097 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.097 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.097 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.097 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.097 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.097 { 00:16:21.097 "cntlid": 127, 00:16:21.097 "qid": 0, 00:16:21.097 "state": "enabled", 00:16:21.098 "thread": "nvmf_tgt_poll_group_000", 00:16:21.098 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:21.098 "listen_address": { 00:16:21.098 "trtype": "TCP", 00:16:21.098 "adrfam": "IPv4", 00:16:21.098 "traddr": "10.0.0.2", 00:16:21.098 "trsvcid": "4420" 00:16:21.098 }, 00:16:21.098 "peer_address": { 00:16:21.098 "trtype": "TCP", 00:16:21.098 "adrfam": "IPv4", 00:16:21.098 "traddr": "10.0.0.1", 00:16:21.098 "trsvcid": "57824" 00:16:21.098 }, 00:16:21.098 "auth": { 00:16:21.098 "state": "completed", 00:16:21.098 "digest": "sha512", 00:16:21.098 "dhgroup": "ffdhe4096" 00:16:21.098 } 00:16:21.098 } 00:16:21.098 ]' 00:16:21.098 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.355 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:21.355 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.355 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:21.355 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.355 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.355 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.355 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.614 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDNkMWE4YjM3NjA3ZDM2NmI5ZWU2YTMxYTdmZmIzYjg2NDEzMTMyNjAyZTllN2E4NGY3NDk0ZjRiNTVjOTgzZmUL5vw=: 00:16:21.614 07:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NDNkMWE4YjM3NjA3ZDM2NmI5ZWU2YTMxYTdmZmIzYjg2NDEzMTMyNjAyZTllN2E4NGY3NDk0ZjRiNTVjOTgzZmUL5vw=: 00:16:22.548 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.548 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:22.548 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.548 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.548 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.548 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:22.548 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.548 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:22.548 07:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:22.806 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:16:22.806 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.806 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:22.806 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:22.806 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:22.806 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.806 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.806 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.806 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.806 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.806 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.806 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.806 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.372 00:16:23.372 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.372 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.372 
07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.630 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.630 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.630 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.630 07:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.630 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.630 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.630 { 00:16:23.630 "cntlid": 129, 00:16:23.630 "qid": 0, 00:16:23.630 "state": "enabled", 00:16:23.630 "thread": "nvmf_tgt_poll_group_000", 00:16:23.630 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:23.630 "listen_address": { 00:16:23.630 "trtype": "TCP", 00:16:23.630 "adrfam": "IPv4", 00:16:23.630 "traddr": "10.0.0.2", 00:16:23.630 "trsvcid": "4420" 00:16:23.630 }, 00:16:23.630 "peer_address": { 00:16:23.630 "trtype": "TCP", 00:16:23.630 "adrfam": "IPv4", 00:16:23.630 "traddr": "10.0.0.1", 00:16:23.630 "trsvcid": "46126" 00:16:23.630 }, 00:16:23.630 "auth": { 00:16:23.630 "state": "completed", 00:16:23.630 "digest": "sha512", 00:16:23.630 "dhgroup": "ffdhe6144" 00:16:23.630 } 00:16:23.630 } 00:16:23.630 ]' 00:16:23.630 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.630 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:23.630 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.888 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:23.888 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.888 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.888 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.888 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.146 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2U2NDdmMDFmNjQzZjc4ZTAyNmU0OTM1Y2Q1YTIyN2NiNTBiODk3YzU0NTRiODI5urJx4w==: --dhchap-ctrl-secret DHHC-1:03:ZDNiN2M4NzM3YjBjYzNhM2M3ZGZmZjQ1ZTBjZjJkMmE2MDM1ZDM4MWY5ZmUyOTMwZDcxNmQwMGM2ZTY3MjUxMKdz6b0=: 00:16:24.146 07:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:Y2U2NDdmMDFmNjQzZjc4ZTAyNmU0OTM1Y2Q1YTIyN2NiNTBiODk3YzU0NTRiODI5urJx4w==: --dhchap-ctrl-secret 
DHHC-1:03:ZDNiN2M4NzM3YjBjYzNhM2M3ZGZmZjQ1ZTBjZjJkMmE2MDM1ZDM4MWY5ZmUyOTMwZDcxNmQwMGM2ZTY3MjUxMKdz6b0=: 00:16:25.079 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.079 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:25.080 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.080 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.080 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.080 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.080 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:25.080 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:25.337 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:16:25.337 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.337 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:25.337 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:25.337 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:25.337 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.337 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.337 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.337 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.337 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.337 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.338 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.338 07:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.908 00:16:25.908 07:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.908 07:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.908 07:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.166 07:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.166 07:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.166 07:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.166 07:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.166 07:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.166 07:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.166 { 00:16:26.166 "cntlid": 131, 00:16:26.166 "qid": 0, 00:16:26.166 "state": "enabled", 00:16:26.166 "thread": "nvmf_tgt_poll_group_000", 00:16:26.166 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:26.166 "listen_address": { 00:16:26.166 "trtype": "TCP", 00:16:26.166 "adrfam": "IPv4", 00:16:26.166 "traddr": "10.0.0.2", 00:16:26.166 "trsvcid": "4420" 00:16:26.166 }, 00:16:26.166 "peer_address": { 00:16:26.166 "trtype": "TCP", 00:16:26.166 "adrfam": "IPv4", 00:16:26.166 "traddr": "10.0.0.1", 00:16:26.166 "trsvcid": "46154" 00:16:26.166 }, 00:16:26.166 "auth": { 00:16:26.166 "state": "completed", 00:16:26.166 "digest": "sha512", 00:16:26.166 "dhgroup": "ffdhe6144" 00:16:26.166 } 00:16:26.166 } 00:16:26.166 ]' 00:16:26.166 07:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.424 07:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:26.424 07:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.424 07:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:26.424 07:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.424 07:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.424 07:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.425 07:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.683 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTQ4Mjk0MTE2Y2RjMGU1NmFiMmYxOTM2YTBlNjQ4ODAvzKPS: --dhchap-ctrl-secret DHHC-1:02:NGM1MDMxYmRjYzI3NzZhNGUzOWNlOWQ1YTk4ZWVkYTI0N2QyNmQ1Y2VlMDkxM2NjJw2DYg==: 00:16:26.683 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MTQ4Mjk0MTE2Y2RjMGU1NmFiMmYxOTM2YTBlNjQ4ODAvzKPS: --dhchap-ctrl-secret DHHC-1:02:NGM1MDMxYmRjYzI3NzZhNGUzOWNlOWQ1YTk4ZWVkYTI0N2QyNmQ1Y2VlMDkxM2NjJw2DYg==: 00:16:27.616 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.616 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:27.616 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.616 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.616 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.616 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.616 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:27.616 07:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:27.874 07:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:16:27.874 07:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.875 07:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:27.875 07:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:27.875 07:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:27.875 07:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.875 07:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.875 07:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.875 07:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.875 07:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.875 07:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.875 07:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.875 07:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.441 00:16:28.441 07:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.441 07:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.441 07:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.699 07:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.699 07:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.699 07:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.699 07:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.699 07:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.699 07:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.699 { 00:16:28.699 "cntlid": 133, 00:16:28.699 "qid": 0, 00:16:28.699 "state": "enabled", 00:16:28.699 "thread": "nvmf_tgt_poll_group_000", 00:16:28.699 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:28.699 "listen_address": { 00:16:28.699 "trtype": "TCP", 00:16:28.699 "adrfam": "IPv4", 00:16:28.699 "traddr": "10.0.0.2", 00:16:28.699 "trsvcid": "4420" 00:16:28.699 }, 00:16:28.699 "peer_address": { 00:16:28.699 "trtype": "TCP", 00:16:28.699 "adrfam": "IPv4", 00:16:28.699 "traddr": "10.0.0.1", 00:16:28.699 "trsvcid": "46170" 00:16:28.699 }, 00:16:28.699 "auth": { 00:16:28.699 "state": "completed", 00:16:28.699 "digest": "sha512", 00:16:28.699 "dhgroup": "ffdhe6144" 00:16:28.699 } 00:16:28.699 } 00:16:28.699 ]' 00:16:28.699 07:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.699 07:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:28.699 07:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.699 07:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:28.699 07:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.957 07:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.957 07:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.957 07:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.215 07:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: --dhchap-ctrl-secret 
DHHC-1:01:YWQ5ZDlkNzVlZjU5YmU3YTIxYjE1ZTUxODQwMTExY2SuIum2: 00:16:29.215 07:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: --dhchap-ctrl-secret DHHC-1:01:YWQ5ZDlkNzVlZjU5YmU3YTIxYjE1ZTUxODQwMTExY2SuIum2: 00:16:30.148 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.148 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.148 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:30.148 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.148 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.148 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.148 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.148 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:30.148 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:30.406 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:16:30.406 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.406 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:30.406 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:30.406 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:30.406 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.406 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:30.406 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.406 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.406 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.406 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:30.406 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:16:30.406 07:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:30.972 00:16:30.972 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.972 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.972 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.972 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.972 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.972 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.972 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.230 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.230 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.230 { 00:16:31.230 "cntlid": 135, 00:16:31.230 "qid": 0, 00:16:31.230 "state": "enabled", 00:16:31.230 "thread": "nvmf_tgt_poll_group_000", 00:16:31.230 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:31.230 "listen_address": { 00:16:31.230 "trtype": "TCP", 00:16:31.230 "adrfam": "IPv4", 00:16:31.230 "traddr": "10.0.0.2", 00:16:31.231 "trsvcid": "4420" 00:16:31.231 }, 00:16:31.231 "peer_address": { 00:16:31.231 "trtype": "TCP", 00:16:31.231 "adrfam": "IPv4", 00:16:31.231 "traddr": "10.0.0.1", 00:16:31.231 "trsvcid": "46188" 00:16:31.231 }, 00:16:31.231 "auth": { 00:16:31.231 "state": "completed", 00:16:31.231 "digest": "sha512", 00:16:31.231 "dhgroup": "ffdhe6144" 00:16:31.231 } 00:16:31.231 } 00:16:31.231 ]' 00:16:31.231 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.231 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:31.231 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.231 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:31.231 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.231 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.231 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.231 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.488 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NDNkMWE4YjM3NjA3ZDM2NmI5ZWU2YTMxYTdmZmIzYjg2NDEzMTMyNjAyZTllN2E4NGY3NDk0ZjRiNTVjOTgzZmUL5vw=: 00:16:31.488 07:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NDNkMWE4YjM3NjA3ZDM2NmI5ZWU2YTMxYTdmZmIzYjg2NDEzMTMyNjAyZTllN2E4NGY3NDk0ZjRiNTVjOTgzZmUL5vw=: 00:16:32.421 07:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.421 07:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:32.421 07:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.421 07:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.421 07:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.421 07:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:32.421 07:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.421 07:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:32.421 07:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:32.679 07:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:16:32.679 07:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.679 07:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:32.679 07:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:32.679 07:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:32.679 07:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.679 07:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.679 07:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.679 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.679 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.679 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.679 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.679 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.612 00:16:33.612 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.612 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.612 07:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.870 07:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.870 07:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.870 07:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.870 07:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.870 07:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.870 07:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.870 { 00:16:33.870 "cntlid": 137, 00:16:33.870 "qid": 0, 00:16:33.870 "state": "enabled", 00:16:33.870 "thread": "nvmf_tgt_poll_group_000", 00:16:33.870 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:33.870 "listen_address": { 00:16:33.870 "trtype": "TCP", 00:16:33.870 "adrfam": "IPv4", 00:16:33.870 "traddr": "10.0.0.2", 00:16:33.870 "trsvcid": "4420" 00:16:33.870 }, 00:16:33.870 "peer_address": { 00:16:33.870 "trtype": "TCP", 00:16:33.870 "adrfam": "IPv4", 00:16:33.870 "traddr": "10.0.0.1", 00:16:33.870 "trsvcid": "41600" 00:16:33.870 }, 00:16:33.870 "auth": { 00:16:33.870 "state": "completed", 00:16:33.870 "digest": "sha512", 00:16:33.870 "dhgroup": "ffdhe8192" 00:16:33.870 } 00:16:33.870 } 00:16:33.870 ]' 00:16:33.870 07:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.870 07:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:33.870 07:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.870 07:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:33.870 07:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.870 07:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.870 07:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.870 07:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.128 07:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2U2NDdmMDFmNjQzZjc4ZTAyNmU0OTM1Y2Q1YTIyN2NiNTBiODk3YzU0NTRiODI5urJx4w==: --dhchap-ctrl-secret DHHC-1:03:ZDNiN2M4NzM3YjBjYzNhM2M3ZGZmZjQ1ZTBjZjJkMmE2MDM1ZDM4MWY5ZmUyOTMwZDcxNmQwMGM2ZTY3MjUxMKdz6b0=: 00:16:34.128 07:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:Y2U2NDdmMDFmNjQzZjc4ZTAyNmU0OTM1Y2Q1YTIyN2NiNTBiODk3YzU0NTRiODI5urJx4w==: --dhchap-ctrl-secret DHHC-1:03:ZDNiN2M4NzM3YjBjYzNhM2M3ZGZmZjQ1ZTBjZjJkMmE2MDM1ZDM4MWY5ZmUyOTMwZDcxNmQwMGM2ZTY3MjUxMKdz6b0=: 00:16:35.062 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.062 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:35.062 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.062 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.062 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.062 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.062 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:35.062 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:35.321 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:16:35.321 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.321 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:35.321 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:35.321 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:35.321 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.321 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.321 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.321 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.321 07:18:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.321 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.321 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.321 07:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.256 00:16:36.256 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.256 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.256 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.514 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.514 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.514 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.514 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.514 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.514 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.514 { 00:16:36.514 "cntlid": 139, 00:16:36.514 "qid": 0, 00:16:36.514 "state": "enabled", 00:16:36.514 "thread": "nvmf_tgt_poll_group_000", 00:16:36.514 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:36.514 "listen_address": { 00:16:36.514 "trtype": "TCP", 00:16:36.514 "adrfam": "IPv4", 00:16:36.514 "traddr": "10.0.0.2", 00:16:36.514 "trsvcid": "4420" 00:16:36.514 }, 00:16:36.514 "peer_address": { 00:16:36.514 "trtype": "TCP", 00:16:36.514 "adrfam": "IPv4", 00:16:36.514 "traddr": "10.0.0.1", 00:16:36.514 "trsvcid": "41634" 00:16:36.514 }, 00:16:36.514 "auth": { 00:16:36.514 "state": "completed", 00:16:36.514 "digest": "sha512", 00:16:36.514 "dhgroup": "ffdhe8192" 00:16:36.514 } 00:16:36.514 } 00:16:36.514 ]' 00:16:36.514 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.514 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:36.514 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.514 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:36.514 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.772 07:18:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.772 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.772 07:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.030 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTQ4Mjk0MTE2Y2RjMGU1NmFiMmYxOTM2YTBlNjQ4ODAvzKPS: --dhchap-ctrl-secret DHHC-1:02:NGM1MDMxYmRjYzI3NzZhNGUzOWNlOWQ1YTk4ZWVkYTI0N2QyNmQ1Y2VlMDkxM2NjJw2DYg==: 00:16:37.030 07:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MTQ4Mjk0MTE2Y2RjMGU1NmFiMmYxOTM2YTBlNjQ4ODAvzKPS: --dhchap-ctrl-secret DHHC-1:02:NGM1MDMxYmRjYzI3NzZhNGUzOWNlOWQ1YTk4ZWVkYTI0N2QyNmQ1Y2VlMDkxM2NjJw2DYg==: 00:16:37.964 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.964 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:37.964 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.964 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.964 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.964 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.964 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:37.964 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:38.221 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:16:38.221 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.221 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:38.222 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:38.222 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:38.222 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.222 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.222 07:18:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.222 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.222 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.222 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.222 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.222 07:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.787 00:16:39.045 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.045 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.045 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.303 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.303 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.303 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.303 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.303 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.303 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.303 { 00:16:39.303 "cntlid": 141, 00:16:39.303 "qid": 0, 00:16:39.303 "state": "enabled", 00:16:39.303 "thread": "nvmf_tgt_poll_group_000", 00:16:39.303 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:39.303 "listen_address": { 00:16:39.303 "trtype": "TCP", 00:16:39.303 "adrfam": "IPv4", 00:16:39.303 "traddr": "10.0.0.2", 00:16:39.303 "trsvcid": "4420" 00:16:39.303 }, 00:16:39.303 "peer_address": { 00:16:39.303 "trtype": "TCP", 00:16:39.303 "adrfam": "IPv4", 00:16:39.303 "traddr": "10.0.0.1", 00:16:39.303 "trsvcid": "41650" 00:16:39.303 }, 00:16:39.303 "auth": { 00:16:39.303 "state": "completed", 00:16:39.303 "digest": "sha512", 00:16:39.303 "dhgroup": "ffdhe8192" 00:16:39.303 } 00:16:39.303 } 00:16:39.303 ]' 00:16:39.303 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.303 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:39.303 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.303 07:18:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:39.303 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.303 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.303 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.303 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.561 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: --dhchap-ctrl-secret DHHC-1:01:YWQ5ZDlkNzVlZjU5YmU3YTIxYjE1ZTUxODQwMTExY2SuIum2: 00:16:39.561 07:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: --dhchap-ctrl-secret DHHC-1:01:YWQ5ZDlkNzVlZjU5YmU3YTIxYjE1ZTUxODQwMTExY2SuIum2: 00:16:40.495 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.495 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:40.495 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.495 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.495 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.495 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.495 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:40.495 07:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:40.753 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:16:40.753 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.753 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:40.753 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:40.753 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:40.753 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.753 07:18:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:40.753 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.753 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.753 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.753 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:40.753 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:40.753 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:41.688 00:16:41.688 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.688 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.688 07:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.946 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.946 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.946 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.946 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.946 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.946 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.946 { 00:16:41.946 "cntlid": 143, 00:16:41.946 "qid": 0, 00:16:41.946 "state": "enabled", 00:16:41.946 "thread": "nvmf_tgt_poll_group_000", 00:16:41.946 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:41.946 "listen_address": { 00:16:41.946 "trtype": "TCP", 00:16:41.946 "adrfam": "IPv4", 00:16:41.946 "traddr": "10.0.0.2", 00:16:41.946 "trsvcid": "4420" 00:16:41.946 }, 00:16:41.946 "peer_address": { 00:16:41.946 "trtype": "TCP", 00:16:41.946 "adrfam": "IPv4", 00:16:41.946 "traddr": "10.0.0.1", 00:16:41.946 "trsvcid": "41668" 00:16:41.946 }, 00:16:41.946 "auth": { 00:16:41.946 "state": "completed", 00:16:41.946 "digest": "sha512", 00:16:41.946 "dhgroup": "ffdhe8192" 00:16:41.946 } 00:16:41.946 } 00:16:41.946 ]' 00:16:41.946 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.946 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:41.946 
07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.946 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:41.946 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.946 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.946 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.946 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.204 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDNkMWE4YjM3NjA3ZDM2NmI5ZWU2YTMxYTdmZmIzYjg2NDEzMTMyNjAyZTllN2E4NGY3NDk0ZjRiNTVjOTgzZmUL5vw=: 00:16:42.204 07:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NDNkMWE4YjM3NjA3ZDM2NmI5ZWU2YTMxYTdmZmIzYjg2NDEzMTMyNjAyZTllN2E4NGY3NDk0ZjRiNTVjOTgzZmUL5vw=: 00:16:43.137 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.137 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:43.137 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.138 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.138 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.138 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:43.138 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:16:43.138 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:43.138 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:43.138 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:43.138 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:43.396 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:16:43.396 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.396 07:18:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:43.396 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:43.396 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:43.397 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.397 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.397 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.397 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.397 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.397 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.397 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.397 07:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.330 00:16:44.330 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.330 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.330 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.588 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.588 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.588 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.588 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.588 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.588 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.588 { 00:16:44.588 "cntlid": 145, 00:16:44.588 "qid": 0, 00:16:44.588 "state": "enabled", 00:16:44.588 "thread": "nvmf_tgt_poll_group_000", 00:16:44.588 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:44.588 "listen_address": { 00:16:44.588 "trtype": "TCP", 00:16:44.588 "adrfam": "IPv4", 00:16:44.588 "traddr": "10.0.0.2", 00:16:44.588 "trsvcid": "4420" 00:16:44.588 }, 00:16:44.588 "peer_address": { 00:16:44.588 
"trtype": "TCP", 00:16:44.588 "adrfam": "IPv4", 00:16:44.588 "traddr": "10.0.0.1", 00:16:44.588 "trsvcid": "59822" 00:16:44.588 }, 00:16:44.588 "auth": { 00:16:44.588 "state": "completed", 00:16:44.588 "digest": "sha512", 00:16:44.588 "dhgroup": "ffdhe8192" 00:16:44.588 } 00:16:44.588 } 00:16:44.588 ]' 00:16:44.588 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.588 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:44.588 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.588 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:44.588 07:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.588 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.588 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.588 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.153 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2U2NDdmMDFmNjQzZjc4ZTAyNmU0OTM1Y2Q1YTIyN2NiNTBiODk3YzU0NTRiODI5urJx4w==: --dhchap-ctrl-secret DHHC-1:03:ZDNiN2M4NzM3YjBjYzNhM2M3ZGZmZjQ1ZTBjZjJkMmE2MDM1ZDM4MWY5ZmUyOTMwZDcxNmQwMGM2ZTY3MjUxMKdz6b0=: 00:16:45.153 07:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:Y2U2NDdmMDFmNjQzZjc4ZTAyNmU0OTM1Y2Q1YTIyN2NiNTBiODk3YzU0NTRiODI5urJx4w==: --dhchap-ctrl-secret DHHC-1:03:ZDNiN2M4NzM3YjBjYzNhM2M3ZGZmZjQ1ZTBjZjJkMmE2MDM1ZDM4MWY5ZmUyOTMwZDcxNmQwMGM2ZTY3MjUxMKdz6b0=: 00:16:46.084 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.084 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:46.084 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.084 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.084 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.084 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:16:46.084 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.084 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.084 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.084 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:16:46.084 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:46.084 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:16:46.084 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:16:46.084 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:46.084 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:16:46.084 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:46.084 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:16:46.084 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:46.084 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:46.649 request: 00:16:46.649 { 00:16:46.649 "name": "nvme0", 00:16:46.649 "trtype": "tcp", 00:16:46.649 "traddr": "10.0.0.2", 00:16:46.649 "adrfam": "ipv4", 00:16:46.649 "trsvcid": "4420", 00:16:46.649 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:46.649 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:46.649 "prchk_reftag": false, 00:16:46.649 "prchk_guard": false, 00:16:46.649 "hdgst": false, 00:16:46.649 "ddgst": false, 00:16:46.649 "dhchap_key": "key2", 00:16:46.649 "allow_unrecognized_csi": false, 00:16:46.649 "method": "bdev_nvme_attach_controller", 00:16:46.649 "req_id": 1 00:16:46.649 } 00:16:46.649 Got JSON-RPC error response 00:16:46.649 response: 00:16:46.649 { 00:16:46.649 "code": -5, 00:16:46.649 "message": "Input/output error" 00:16:46.649 } 00:16:46.649 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:46.649 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:46.649 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:46.649 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:46.649 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:46.649 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.649 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.649 07:18:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.649 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.649 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.649 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.649 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.649 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:46.649 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:46.649 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:46.649 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:16:46.649 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:46.649 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:16:46.649 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:46.649 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:46.649 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:46.649 07:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:47.581 request: 00:16:47.581 { 00:16:47.581 "name": "nvme0", 00:16:47.581 "trtype": "tcp", 00:16:47.581 "traddr": "10.0.0.2", 00:16:47.581 "adrfam": "ipv4", 00:16:47.581 "trsvcid": "4420", 00:16:47.581 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:47.582 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:47.582 "prchk_reftag": false, 00:16:47.582 "prchk_guard": false, 00:16:47.582 "hdgst": false, 00:16:47.582 "ddgst": false, 00:16:47.582 "dhchap_key": "key1", 00:16:47.582 "dhchap_ctrlr_key": "ckey2", 00:16:47.582 "allow_unrecognized_csi": false, 00:16:47.582 "method": "bdev_nvme_attach_controller", 00:16:47.582 "req_id": 1 00:16:47.582 } 00:16:47.582 Got JSON-RPC error response 00:16:47.582 response: 00:16:47.582 { 00:16:47.582 "code": -5, 00:16:47.582 "message": "Input/output error" 00:16:47.582 } 00:16:47.582 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:47.582 07:18:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:47.582 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:47.582 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:47.582 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:47.582 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.582 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.582 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.582 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:16:47.582 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.582 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.582 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.582 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.582 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:47.582 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.582 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:16:47.582 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:47.582 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:16:47.582 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:47.582 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.582 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.582 07:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.515 request: 00:16:48.515 { 00:16:48.515 "name": "nvme0", 00:16:48.515 "trtype": "tcp", 00:16:48.515 "traddr": "10.0.0.2", 00:16:48.515 "adrfam": "ipv4", 00:16:48.515 "trsvcid": "4420", 00:16:48.515 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:48.515 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:48.515 "prchk_reftag": false, 00:16:48.515 "prchk_guard": false, 00:16:48.515 "hdgst": false, 00:16:48.515 "ddgst": false, 00:16:48.515 "dhchap_key": "key1", 00:16:48.515 "dhchap_ctrlr_key": "ckey1", 00:16:48.515 "allow_unrecognized_csi": false, 00:16:48.515 "method": "bdev_nvme_attach_controller", 00:16:48.515 "req_id": 1 00:16:48.515 } 00:16:48.515 Got JSON-RPC error response 00:16:48.515 response: 00:16:48.515 { 00:16:48.515 "code": -5, 00:16:48.515 "message": "Input/output error" 00:16:48.515 } 00:16:48.515 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:48.515 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:48.515 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:48.515 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:48.515 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:48.515 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.515 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.515 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.515 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2487580 00:16:48.515 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 2487580 ']' 00:16:48.515 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 2487580 00:16:48.515 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:16:48.515 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:48.515 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2487580 00:16:48.515 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:48.515 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:48.515 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2487580' 00:16:48.515 killing process with pid 2487580 00:16:48.515 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 2487580 00:16:48.515 07:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 2487580 00:16:48.773 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:16:48.773 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:48.773 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:48.773 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:48.773 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2510330 00:16:48.773 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:16:48.773 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2510330 00:16:48.773 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2510330 ']' 00:16:48.773 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.773 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:48.773 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.773 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:48.773 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.030 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:49.030 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:16:49.030 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:49.030 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:49.030 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.030 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:49.030 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:49.031 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2510330 00:16:49.031 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2510330 ']' 00:16:49.031 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.031 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:49.031 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
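The entries that follow show the second nvmf_tgt instance being brought up paused (--wait-for-rpc) and the generated DH-HMAC-CHAP key files being registered with its keyring. Condensed into plain commands, the sequence is roughly the following; the binary path, namespace, flags and key filenames are taken verbatim from this trace, rpc.py is shortened to its repository-relative path, and the harness helpers (waitforlisten, rpc_cmd) are replaced by direct calls, so read this as an illustrative sketch rather than the exact code in target/auth.sh:

# restart the target inside the test namespace, paused until RPC init, with nvmf_auth debug logging
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
# once /var/tmp/spdk.sock accepts RPCs, load the key files generated earlier into the keyring
scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.zrV
scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.piq
scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-sha256.YoI
scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.jQY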
00:16:49.031 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:49.031 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.288 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:49.288 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:16:49.288 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:16:49.288 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.288 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.623 null0 00:16:49.623 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.623 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:49.623 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.zrV 00:16:49.623 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.623 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.623 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.623 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.piq ]] 00:16:49.623 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.piq 00:16:49.623 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.623 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.623 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.623 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:49.623 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.YoI 00:16:49.623 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.623 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.623 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.623 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.jQY ]] 00:16:49.623 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.jQY 00:16:49.623 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.624 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.624 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.624 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:49.624 07:18:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.5Bw 00:16:49.624 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.624 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.624 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.624 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.gx3 ]] 00:16:49.624 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.gx3 00:16:49.624 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.624 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.624 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.624 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:49.624 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.pxV 00:16:49.624 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.624 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.624 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.624 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:16:49.624 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:16:49.624 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.624 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:49.624 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:49.624 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:49.624 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.624 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:49.624 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.624 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.624 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.624 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:49.624 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:16:49.624 07:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:51.057 nvme0n1 00:16:51.057 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.057 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.057 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.057 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.057 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.057 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.057 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.057 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.057 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.057 { 00:16:51.057 "cntlid": 1, 00:16:51.057 "qid": 0, 00:16:51.057 "state": "enabled", 00:16:51.057 "thread": "nvmf_tgt_poll_group_000", 00:16:51.057 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:51.057 "listen_address": { 00:16:51.057 "trtype": "TCP", 00:16:51.057 "adrfam": "IPv4", 00:16:51.057 "traddr": "10.0.0.2", 00:16:51.057 "trsvcid": "4420" 00:16:51.057 }, 00:16:51.057 "peer_address": { 00:16:51.057 "trtype": "TCP", 00:16:51.057 "adrfam": "IPv4", 00:16:51.057 "traddr": "10.0.0.1", 00:16:51.057 "trsvcid": "59870" 00:16:51.057 }, 00:16:51.057 "auth": { 00:16:51.057 "state": "completed", 00:16:51.057 "digest": "sha512", 00:16:51.057 "dhgroup": "ffdhe8192" 00:16:51.057 } 00:16:51.057 } 00:16:51.057 ]' 00:16:51.057 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.315 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:51.315 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.316 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:51.316 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.316 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.316 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.316 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.573 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NDNkMWE4YjM3NjA3ZDM2NmI5ZWU2YTMxYTdmZmIzYjg2NDEzMTMyNjAyZTllN2E4NGY3NDk0ZjRiNTVjOTgzZmUL5vw=: 00:16:51.574 07:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NDNkMWE4YjM3NjA3ZDM2NmI5ZWU2YTMxYTdmZmIzYjg2NDEzMTMyNjAyZTllN2E4NGY3NDk0ZjRiNTVjOTgzZmUL5vw=: 00:16:52.506 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.506 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:52.506 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.506 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.506 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.506 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:52.506 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.506 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.506 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.506 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:16:52.506 07:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:16:52.764 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:52.764 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:52.764 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:52.764 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:16:52.764 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:52.764 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:16:52.764 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:52.764 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:52.764 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:52.764 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:53.022 request: 00:16:53.022 { 00:16:53.022 "name": "nvme0", 00:16:53.022 "trtype": "tcp", 00:16:53.022 "traddr": "10.0.0.2", 00:16:53.022 "adrfam": "ipv4", 00:16:53.022 "trsvcid": "4420", 00:16:53.022 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:53.022 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:53.022 "prchk_reftag": false, 00:16:53.022 "prchk_guard": false, 00:16:53.022 "hdgst": false, 00:16:53.022 "ddgst": false, 00:16:53.022 "dhchap_key": "key3", 00:16:53.022 "allow_unrecognized_csi": false, 00:16:53.022 "method": "bdev_nvme_attach_controller", 00:16:53.022 "req_id": 1 00:16:53.022 } 00:16:53.022 Got JSON-RPC error response 00:16:53.022 response: 00:16:53.022 { 00:16:53.022 "code": -5, 00:16:53.022 "message": "Input/output error" 00:16:53.022 } 00:16:53.022 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:53.022 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:53.022 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:53.022 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:53.022 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:16:53.022 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:16:53.022 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:53.022 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:53.281 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:53.281 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:53.281 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:53.281 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:16:53.281 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:53.281 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:16:53.281 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:53.281 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:53.281 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:53.281 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:53.540 request: 00:16:53.540 { 00:16:53.540 "name": "nvme0", 00:16:53.540 "trtype": "tcp", 00:16:53.540 "traddr": "10.0.0.2", 00:16:53.540 "adrfam": "ipv4", 00:16:53.540 "trsvcid": "4420", 00:16:53.540 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:53.540 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:53.540 "prchk_reftag": false, 00:16:53.540 "prchk_guard": false, 00:16:53.540 "hdgst": false, 00:16:53.540 "ddgst": false, 00:16:53.540 "dhchap_key": "key3", 00:16:53.540 "allow_unrecognized_csi": false, 00:16:53.540 "method": "bdev_nvme_attach_controller", 00:16:53.540 "req_id": 1 00:16:53.540 } 00:16:53.540 Got JSON-RPC error response 00:16:53.540 response: 00:16:53.540 { 00:16:53.540 "code": -5, 00:16:53.540 "message": "Input/output error" 00:16:53.540 } 00:16:53.540 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:53.540 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:53.540 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:53.540 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:53.540 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:53.540 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:16:53.540 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:53.540 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:53.540 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:53.540 07:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:53.798 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:53.798 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.798 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.798 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.798 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:53.798 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.798 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.798 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.798 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:53.798 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:53.798 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:53.798 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:16:53.798 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:53.798 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:16:53.798 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:53.798 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:53.798 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:53.798 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:54.364 request: 00:16:54.364 { 00:16:54.364 "name": "nvme0", 00:16:54.364 "trtype": "tcp", 00:16:54.364 "traddr": "10.0.0.2", 00:16:54.364 "adrfam": "ipv4", 00:16:54.364 "trsvcid": "4420", 00:16:54.364 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:54.364 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:54.364 "prchk_reftag": false, 00:16:54.364 "prchk_guard": false, 00:16:54.364 "hdgst": false, 00:16:54.364 "ddgst": false, 00:16:54.364 "dhchap_key": "key0", 00:16:54.364 "dhchap_ctrlr_key": "key1", 00:16:54.364 "allow_unrecognized_csi": false, 00:16:54.364 "method": "bdev_nvme_attach_controller", 00:16:54.364 "req_id": 1 00:16:54.364 } 00:16:54.364 Got JSON-RPC error response 00:16:54.364 response: 00:16:54.364 { 00:16:54.364 "code": -5, 00:16:54.364 "message": "Input/output error" 00:16:54.364 } 00:16:54.364 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:54.364 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:54.364 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:54.364 07:18:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:54.364 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:16:54.364 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:54.364 07:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:54.622 nvme0n1 00:16:54.880 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:16:54.880 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:16:54.880 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.138 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.138 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.138 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.395 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:16:55.395 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.396 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.396 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.396 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:55.396 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:55.396 07:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:56.770 nvme0n1 00:16:56.770 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:16:56.770 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:16:56.770 07:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:16:57.028 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.028 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:57.028 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.028 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.028 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.028 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:16:57.028 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.028 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:16:57.285 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.285 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: --dhchap-ctrl-secret DHHC-1:03:NDNkMWE4YjM3NjA3ZDM2NmI5ZWU2YTMxYTdmZmIzYjg2NDEzMTMyNjAyZTllN2E4NGY3NDk0ZjRiNTVjOTgzZmUL5vw=: 00:16:57.285 07:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: --dhchap-ctrl-secret DHHC-1:03:NDNkMWE4YjM3NjA3ZDM2NmI5ZWU2YTMxYTdmZmIzYjg2NDEzMTMyNjAyZTllN2E4NGY3NDk0ZjRiNTVjOTgzZmUL5vw=: 00:16:58.219 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:16:58.219 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:16:58.219 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:16:58.219 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:16:58.219 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:16:58.219 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:16:58.219 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:16:58.219 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.219 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.477 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT 
bdev_connect -b nvme0 --dhchap-key key1 00:16:58.477 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:58.477 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:16:58.477 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:16:58.477 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:58.477 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:16:58.477 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:58.477 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:58.477 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:58.477 07:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:59.411 request: 00:16:59.411 { 00:16:59.411 "name": "nvme0", 00:16:59.411 "trtype": "tcp", 00:16:59.411 "traddr": "10.0.0.2", 00:16:59.411 "adrfam": "ipv4", 00:16:59.411 "trsvcid": "4420", 00:16:59.411 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:59.411 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:59.411 "prchk_reftag": false, 00:16:59.411 "prchk_guard": false, 00:16:59.411 "hdgst": false, 00:16:59.411 "ddgst": false, 00:16:59.411 "dhchap_key": "key1", 00:16:59.411 "allow_unrecognized_csi": false, 00:16:59.411 "method": "bdev_nvme_attach_controller", 00:16:59.411 "req_id": 1 00:16:59.411 } 00:16:59.411 Got JSON-RPC error response 00:16:59.411 response: 00:16:59.411 { 00:16:59.411 "code": -5, 00:16:59.411 "message": "Input/output error" 00:16:59.411 } 00:16:59.411 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:59.411 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:59.411 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:59.411 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:59.411 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:59.411 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:59.411 07:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:00.784 nvme0n1 00:17:00.784 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:00.784 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:00.784 07:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.784 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.784 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.784 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.350 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:01.350 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.350 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.350 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.350 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:01.350 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:01.350 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:01.608 nvme0n1 00:17:01.608 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:01.608 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:01.608 07:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.865 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.865 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.865 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.123 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:02.123 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.123 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.123 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.123 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MTQ4Mjk0MTE2Y2RjMGU1NmFiMmYxOTM2YTBlNjQ4ODAvzKPS: '' 2s 00:17:02.123 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:02.123 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:02.123 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MTQ4Mjk0MTE2Y2RjMGU1NmFiMmYxOTM2YTBlNjQ4ODAvzKPS: 00:17:02.123 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:02.123 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:02.123 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:02.123 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MTQ4Mjk0MTE2Y2RjMGU1NmFiMmYxOTM2YTBlNjQ4ODAvzKPS: ]] 00:17:02.123 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MTQ4Mjk0MTE2Y2RjMGU1NmFiMmYxOTM2YTBlNjQ4ODAvzKPS: 00:17:02.123 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:02.123 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:02.123 07:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:04.022 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:04.022 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:17:04.022 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:17:04.022 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:17:04.022 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:17:04.022 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:17:04.022 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:17:04.022 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:04.022 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.022 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.022 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.022 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # 
nvme_set_keys nvme0 '' DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: 2s 00:17:04.022 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:04.022 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:04.022 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:04.022 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: 00:17:04.022 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:04.022 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:04.022 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:04.022 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: ]] 00:17:04.022 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:M2EwZDM4MjkwMzFiYjUxYzkxOGM4NjUwMGE4NGJhZjlhZDM2ZjEzYzhiMmE3ZDU1UQeO5Q==: 00:17:04.022 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:04.022 07:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:06.550 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:06.550 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:17:06.550 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:17:06.550 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:17:06.550 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:17:06.550 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:17:06.550 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:17:06.550 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.550 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:06.550 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.550 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.550 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.550 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:06.550 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:06.550 07:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:07.484 nvme0n1 00:17:07.484 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:07.484 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.484 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.484 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.484 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:07.484 07:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:08.416 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:08.416 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:08.416 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.674 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.674 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:08.674 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.674 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.674 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.674 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:08.674 07:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:08.932 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:08.932 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:08.932 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.190 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.190 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:09.190 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.190 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.190 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.190 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:09.190 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:09.190 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:09.190 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:17:09.190 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:09.190 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:17:09.190 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:09.190 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:09.190 07:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:10.124 request: 00:17:10.124 { 00:17:10.124 "name": "nvme0", 00:17:10.124 "dhchap_key": "key1", 00:17:10.124 "dhchap_ctrlr_key": "key3", 00:17:10.124 "method": "bdev_nvme_set_keys", 00:17:10.124 "req_id": 1 00:17:10.124 } 00:17:10.124 Got JSON-RPC error response 00:17:10.124 response: 00:17:10.124 { 00:17:10.124 "code": -13, 00:17:10.124 "message": "Permission denied" 00:17:10.124 } 00:17:10.124 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:10.124 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:10.124 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:10.124 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:10.124 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:10.124 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.124 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:10.382 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:17:10.382 07:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:11.314 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:11.314 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:11.314 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.572 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:11.572 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:11.572 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.572 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.572 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.572 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:11.572 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:11.572 07:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:12.943 nvme0n1 00:17:12.944 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:12.944 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.944 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.944 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.944 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:12.944 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:12.944 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:12.944 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
00:17:12.944 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:12.944 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:17:12.944 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:12.944 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:12.944 07:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:13.876 request: 00:17:13.876 { 00:17:13.876 "name": "nvme0", 00:17:13.876 "dhchap_key": "key2", 00:17:13.876 "dhchap_ctrlr_key": "key0", 00:17:13.876 "method": "bdev_nvme_set_keys", 00:17:13.876 "req_id": 1 00:17:13.876 } 00:17:13.876 Got JSON-RPC error response 00:17:13.876 response: 00:17:13.876 { 00:17:13.876 "code": -13, 00:17:13.876 "message": "Permission denied" 00:17:13.876 } 00:17:13.876 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:13.876 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:13.876 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:13.876 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:13.876 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:13.876 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.876 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:14.133 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:14.133 07:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:15.066 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:15.066 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:15.066 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.324 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:15.324 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:15.324 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:15.324 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2487606 00:17:15.324 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 2487606 ']' 00:17:15.324 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 2487606 00:17:15.324 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:17:15.324 
07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:15.324 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2487606 00:17:15.324 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:15.324 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:15.324 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2487606' 00:17:15.324 killing process with pid 2487606 00:17:15.324 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 2487606 00:17:15.324 07:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 2487606 00:17:15.889 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:15.889 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:15.889 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:15.889 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:15.889 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:15.889 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:15.889 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:15.889 rmmod nvme_tcp 00:17:15.889 rmmod nvme_fabrics 00:17:15.890 rmmod nvme_keyring 00:17:15.890 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:15.890 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:15.890 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:15.890 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2510330 ']' 00:17:15.890 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2510330 00:17:15.890 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 2510330 ']' 00:17:15.890 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 2510330 00:17:15.890 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:17:15.890 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:15.890 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2510330 00:17:15.890 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:15.890 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:15.890 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2510330' 00:17:15.890 killing process with pid 2510330 00:17:15.890 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 2510330 00:17:15.890 07:19:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 2510330 00:17:16.149 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:16.149 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:16.149 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:16.149 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:16.149 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:17:16.149 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:16.149 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:16.149 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:16.149 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:16.149 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.149 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:16.149 07:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.zrV /tmp/spdk.key-sha256.YoI /tmp/spdk.key-sha384.5Bw /tmp/spdk.key-sha512.pxV /tmp/spdk.key-sha512.piq /tmp/spdk.key-sha384.jQY /tmp/spdk.key-sha256.gx3 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:18.689 00:17:18.689 real 3m31.966s 00:17:18.689 user 8m17.846s 00:17:18.689 sys 0m28.379s 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.689 ************************************ 00:17:18.689 END TEST nvmf_auth_target 00:17:18.689 ************************************ 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:18.689 ************************************ 00:17:18.689 START TEST nvmf_bdevio_no_huge 00:17:18.689 ************************************ 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:18.689 * Looking for test storage... 
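The section that begins here is the no-hugepages variant of the bdevio test; nvmf_target_extra.sh@40 above shows the exact invocation. Outside the CI harness the same test can presumably be run directly from an SPDK checkout, for example:

    # Standalone run of the same test (path relative to the SPDK repo root).
    ./test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages

Everything below is that script's output: network setup, target start without hugepages, and the CUnit bdevio suite run against an NVMe/TCP-backed bdev.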
00:17:18.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:18.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.689 --rc genhtml_branch_coverage=1 00:17:18.689 --rc genhtml_function_coverage=1 00:17:18.689 --rc genhtml_legend=1 00:17:18.689 --rc geninfo_all_blocks=1 00:17:18.689 --rc geninfo_unexecuted_blocks=1 00:17:18.689 00:17:18.689 ' 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:18.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.689 --rc genhtml_branch_coverage=1 00:17:18.689 --rc genhtml_function_coverage=1 00:17:18.689 --rc genhtml_legend=1 00:17:18.689 --rc geninfo_all_blocks=1 00:17:18.689 --rc geninfo_unexecuted_blocks=1 00:17:18.689 00:17:18.689 ' 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:18.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.689 --rc genhtml_branch_coverage=1 00:17:18.689 --rc genhtml_function_coverage=1 00:17:18.689 --rc genhtml_legend=1 00:17:18.689 --rc geninfo_all_blocks=1 00:17:18.689 --rc geninfo_unexecuted_blocks=1 00:17:18.689 00:17:18.689 ' 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:18.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.689 --rc genhtml_branch_coverage=1 00:17:18.689 --rc genhtml_function_coverage=1 00:17:18.689 --rc genhtml_legend=1 00:17:18.689 --rc geninfo_all_blocks=1 00:17:18.689 --rc geninfo_unexecuted_blocks=1 00:17:18.689 00:17:18.689 ' 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:18.689 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:18.690 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:18.690 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:18.690 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:18.690 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:18.690 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:17:18.690 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:18.690 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:18.690 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:18.690 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.690 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.690 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.690 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:18.690 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.690 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:17:18.690 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:18.690 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:18.690 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:18.690 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:18.690 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:18.690 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:18.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:18.690 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:18.690 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:18.690 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:18.690 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:18.690 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:18.690 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:18.690 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:18.690 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:18.690 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:18.690 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:18.690 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:18.690 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.690 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:18.690 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.690 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:18.690 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:18.690 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:17:18.690 07:19:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:17:20.596 
07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:20.596 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:20.596 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:20.596 Found net devices under 0000:09:00.0: cvl_0_0 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:20.596 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:20.597 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:20.597 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:20.597 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:20.597 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:20.597 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:20.597 Found net devices under 0000:09:00.1: cvl_0_1 00:17:20.597 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:20.597 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:20.597 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:17:20.597 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:20.597 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:20.597 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:20.597 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:20.597 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:20.597 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:20.597 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:20.597 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:20.597 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:20.597 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:20.597 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:20.597 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:20.597 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:20.597 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:20.597 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:20.597 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:20.597 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:20.597 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:20.597 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:20.597 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:20.597 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:20.597 07:19:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:20.856 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:20.856 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:20.856 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:20.856 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:20.856 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:20.856 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:17:20.856 00:17:20.856 --- 10.0.0.2 ping statistics --- 00:17:20.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.856 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:17:20.856 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:20.856 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:20.856 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:17:20.856 00:17:20.856 --- 10.0.0.1 ping statistics --- 00:17:20.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.856 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:17:20.856 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:20.856 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:17:20.856 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:20.856 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:20.856 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:20.856 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:20.856 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:20.856 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:20.856 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:20.856 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:20.856 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:20.856 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:20.856 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:20.856 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2515613 00:17:20.856 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:20.856 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2515613 00:17:20.856 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 2515613 ']' 00:17:20.856 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.856 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@838 -- # local max_retries=100 00:17:20.856 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.856 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:20.856 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:20.856 [2024-11-20 07:19:24.119397] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:17:20.856 [2024-11-20 07:19:24.119498] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:20.856 [2024-11-20 07:19:24.199224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:20.856 [2024-11-20 07:19:24.261971] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:20.856 [2024-11-20 07:19:24.262014] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:20.856 [2024-11-20 07:19:24.262042] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:20.856 [2024-11-20 07:19:24.262053] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:20.856 [2024-11-20 07:19:24.262062] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
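For readability, the setup that the nvmf/common.sh helpers performed above reduces to the following commands (a condensed sketch, not the helpers themselves; the cvl_0_0/cvl_0_1 device names and 10.0.0.x addresses are the ones detected in this particular run):

    # Move the target-side port into its own namespace; the initiator side stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # Let NVMe/TCP traffic to port 4420 through the local firewall.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Start the target inside the namespace; --no-huge -s 1024 makes DPDK use
    # 1024 MB of ordinary memory instead of hugepages, which is the point of this test.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &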
00:17:20.856 [2024-11-20 07:19:24.263095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:20.857 [2024-11-20 07:19:24.263154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:17:20.857 [2024-11-20 07:19:24.263221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:17:20.857 [2024-11-20 07:19:24.263225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:21.115 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:21.115 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:17:21.115 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:21.115 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:21.115 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:21.115 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:21.115 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:21.115 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.115 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:21.115 [2024-11-20 07:19:24.411100] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:21.115 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.115 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:21.115 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.115 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:21.115 Malloc0 00:17:21.115 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.115 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:21.115 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.115 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:21.115 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.115 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:21.115 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.115 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:21.115 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.115 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:17:21.115 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.115 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:21.115 [2024-11-20 07:19:24.448978] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:21.115 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.115 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:21.115 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:21.115 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:17:21.115 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:17:21.115 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:21.115 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:21.115 { 00:17:21.115 "params": { 00:17:21.115 "name": "Nvme$subsystem", 00:17:21.115 "trtype": "$TEST_TRANSPORT", 00:17:21.115 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:21.115 "adrfam": "ipv4", 00:17:21.115 "trsvcid": "$NVMF_PORT", 00:17:21.115 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:21.115 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:21.115 "hdgst": ${hdgst:-false}, 00:17:21.115 "ddgst": ${ddgst:-false} 00:17:21.115 }, 00:17:21.115 "method": "bdev_nvme_attach_controller" 00:17:21.115 } 00:17:21.115 EOF 00:17:21.115 )") 00:17:21.115 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:17:21.115 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:17:21.115 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:17:21.115 07:19:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:21.115 "params": { 00:17:21.115 "name": "Nvme1", 00:17:21.115 "trtype": "tcp", 00:17:21.115 "traddr": "10.0.0.2", 00:17:21.115 "adrfam": "ipv4", 00:17:21.115 "trsvcid": "4420", 00:17:21.115 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:21.115 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:21.115 "hdgst": false, 00:17:21.115 "ddgst": false 00:17:21.115 }, 00:17:21.115 "method": "bdev_nvme_attach_controller" 00:17:21.115 }' 00:17:21.115 [2024-11-20 07:19:24.495570] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
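Before bdevio starts, the target has been provisioned through rpc_cmd (the autotest wrapper around SPDK's JSON-RPC client). Condensed to plain scripts/rpc.py calls, the sequence traced above is:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The JSON fragment printed by gen_nvmf_target_json is then handed to the bdevio app over /dev/fd/62, so bdevio attaches to that subsystem as bdev Nvme1n1 and runs its CUnit suite against it.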
00:17:21.115 [2024-11-20 07:19:24.495677] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2515752 ] 00:17:21.373 [2024-11-20 07:19:24.567903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:21.373 [2024-11-20 07:19:24.633767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:21.373 [2024-11-20 07:19:24.633815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:21.373 [2024-11-20 07:19:24.633819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.632 I/O targets: 00:17:21.632 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:21.632 00:17:21.632 00:17:21.632 CUnit - A unit testing framework for C - Version 2.1-3 00:17:21.632 http://cunit.sourceforge.net/ 00:17:21.632 00:17:21.632 00:17:21.632 Suite: bdevio tests on: Nvme1n1 00:17:21.632 Test: blockdev write read block ...passed 00:17:21.632 Test: blockdev write zeroes read block ...passed 00:17:21.632 Test: blockdev write zeroes read no split ...passed 00:17:21.890 Test: blockdev write zeroes read split ...passed 00:17:21.890 Test: blockdev write zeroes read split partial ...passed 00:17:21.890 Test: blockdev reset ...[2024-11-20 07:19:25.146786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:21.890 [2024-11-20 07:19:25.146901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb26e0 (9): Bad file descriptor 00:17:21.890 [2024-11-20 07:19:25.163818] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:17:21.890 passed 00:17:21.890 Test: blockdev write read 8 blocks ...passed 00:17:21.890 Test: blockdev write read size > 128k ...passed 00:17:21.890 Test: blockdev write read invalid size ...passed 00:17:21.890 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:21.890 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:21.890 Test: blockdev write read max offset ...passed 00:17:22.148 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:22.148 Test: blockdev writev readv 8 blocks ...passed 00:17:22.148 Test: blockdev writev readv 30 x 1block ...passed 00:17:22.148 Test: blockdev writev readv block ...passed 00:17:22.148 Test: blockdev writev readv size > 128k ...passed 00:17:22.148 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:22.148 Test: blockdev comparev and writev ...[2024-11-20 07:19:25.457516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:22.148 [2024-11-20 07:19:25.457553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:22.148 [2024-11-20 07:19:25.457577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:22.148 [2024-11-20 07:19:25.457595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:22.148 [2024-11-20 07:19:25.457957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:22.148 [2024-11-20 07:19:25.457982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:22.148 [2024-11-20 07:19:25.458005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:22.148 [2024-11-20 07:19:25.458022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:22.148 [2024-11-20 07:19:25.458342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:22.148 [2024-11-20 07:19:25.458368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:22.148 [2024-11-20 07:19:25.458390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:22.148 [2024-11-20 07:19:25.458407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:22.148 [2024-11-20 07:19:25.458739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:22.148 [2024-11-20 07:19:25.458764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:22.148 [2024-11-20 07:19:25.458785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:22.148 [2024-11-20 07:19:25.458802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:22.148 passed 00:17:22.148 Test: blockdev nvme passthru rw ...passed 00:17:22.148 Test: blockdev nvme passthru vendor specific ...[2024-11-20 07:19:25.540594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:22.148 [2024-11-20 07:19:25.540622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:22.148 [2024-11-20 07:19:25.540756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:22.148 [2024-11-20 07:19:25.540778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:22.148 [2024-11-20 07:19:25.540909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:22.148 [2024-11-20 07:19:25.540931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:22.148 [2024-11-20 07:19:25.541065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:22.148 [2024-11-20 07:19:25.541087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:22.148 passed 00:17:22.148 Test: blockdev nvme admin passthru ...passed 00:17:22.406 Test: blockdev copy ...passed 00:17:22.406 00:17:22.406 Run Summary: Type Total Ran Passed Failed Inactive 00:17:22.406 suites 1 1 n/a 0 0 00:17:22.406 tests 23 23 23 0 0 00:17:22.406 asserts 152 152 152 0 n/a 00:17:22.406 00:17:22.406 Elapsed time = 1.310 seconds 00:17:22.664 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:22.664 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.664 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:22.664 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.664 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:22.664 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:22.664 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:22.664 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:17:22.664 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:22.664 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:17:22.664 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:22.664 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:22.664 rmmod nvme_tcp 00:17:22.664 rmmod nvme_fabrics 00:17:22.664 rmmod nvme_keyring 00:17:22.664 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:22.664 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:17:22.664 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:17:22.664 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2515613 ']' 00:17:22.664 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2515613 00:17:22.664 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 2515613 ']' 00:17:22.664 07:19:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 2515613 00:17:22.664 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:17:22.664 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:22.664 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2515613 00:17:22.664 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:17:22.664 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:17:22.664 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2515613' 00:17:22.664 killing process with pid 2515613 00:17:22.664 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 2515613 00:17:22.664 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 2515613 00:17:23.276 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:23.276 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:23.276 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:23.276 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:17:23.276 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:17:23.276 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:23.276 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:17:23.276 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:23.276 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:23.276 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.276 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:23.276 07:19:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.206 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:25.206 00:17:25.206 real 0m6.884s 00:17:25.206 user 0m11.796s 00:17:25.206 sys 0m2.670s 00:17:25.206 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:25.206 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:17:25.206 ************************************ 00:17:25.206 END TEST nvmf_bdevio_no_huge 00:17:25.206 ************************************ 00:17:25.206 07:19:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:25.206 07:19:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:25.206 07:19:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:25.206 07:19:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:25.206 ************************************ 00:17:25.206 START TEST nvmf_tls 00:17:25.206 ************************************ 00:17:25.206 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:25.206 * Looking for test storage... 00:17:25.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:25.206 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:25.206 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:17:25.206 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:25.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.465 --rc genhtml_branch_coverage=1 00:17:25.465 --rc genhtml_function_coverage=1 00:17:25.465 --rc genhtml_legend=1 00:17:25.465 --rc geninfo_all_blocks=1 00:17:25.465 --rc geninfo_unexecuted_blocks=1 00:17:25.465 00:17:25.465 ' 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:25.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.465 --rc genhtml_branch_coverage=1 00:17:25.465 --rc genhtml_function_coverage=1 00:17:25.465 --rc genhtml_legend=1 00:17:25.465 --rc geninfo_all_blocks=1 00:17:25.465 --rc geninfo_unexecuted_blocks=1 00:17:25.465 00:17:25.465 ' 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:25.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.465 --rc genhtml_branch_coverage=1 00:17:25.465 --rc genhtml_function_coverage=1 00:17:25.465 --rc genhtml_legend=1 00:17:25.465 --rc geninfo_all_blocks=1 00:17:25.465 --rc geninfo_unexecuted_blocks=1 00:17:25.465 00:17:25.465 ' 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:25.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.465 --rc genhtml_branch_coverage=1 00:17:25.465 --rc genhtml_function_coverage=1 00:17:25.465 --rc genhtml_legend=1 00:17:25.465 --rc geninfo_all_blocks=1 00:17:25.465 --rc geninfo_unexecuted_blocks=1 00:17:25.465 00:17:25.465 ' 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:25.465 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:25.466 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:25.466 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:25.466 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:25.466 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:25.466 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:25.466 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:17:25.466 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:25.466 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:25.466 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:25.466 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:25.466 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:25.466 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.466 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:25.466 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.466 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:25.466 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:25.466 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:17:25.466 07:19:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:27.994 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:27.994 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:17:27.994 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:27.994 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:27.994 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:27.994 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:27.994 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:27.994 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:17:27.994 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:27.994 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:17:27.994 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:17:27.994 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:17:27.994 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:17:27.994 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:17:27.994 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:17:27.994 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:27.994 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:27.994 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:27.994 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:27.994 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:27.994 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:17:27.994 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:27.994 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:27.994 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:27.994 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:27.994 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:27.994 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:27.994 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:27.994 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:27.994 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:27.994 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:27.994 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:27.995 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:27.995 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:27.995 Found net devices under 0000:09:00.0: cvl_0_0 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:27.995 Found net devices under 0000:09:00.1: cvl_0_1 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:27.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:27.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:17:27.995 00:17:27.995 --- 10.0.0.2 ping statistics --- 00:17:27.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.995 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:27.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:27.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:17:27.995 00:17:27.995 --- 10.0.0.1 ping statistics --- 00:17:27.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.995 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:27.995 07:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:27.995 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:27.995 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:27.995 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:27.995 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:27.995 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2517849 00:17:27.995 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:27.995 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2517849 00:17:27.995 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2517849 ']' 00:17:27.995 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.995 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:27.995 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.995 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:27.995 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:27.995 [2024-11-20 07:19:31.060265] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:17:27.995 [2024-11-20 07:19:31.060376] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:27.995 [2024-11-20 07:19:31.134767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.995 [2024-11-20 07:19:31.191058] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:27.995 [2024-11-20 07:19:31.191129] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:27.995 [2024-11-20 07:19:31.191143] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:27.995 [2024-11-20 07:19:31.191154] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:27.995 [2024-11-20 07:19:31.191163] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:27.995 [2024-11-20 07:19:31.191847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:27.995 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:27.995 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:17:27.995 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:27.996 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:27.996 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:27.996 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:27.996 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:17:27.996 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:28.254 true 00:17:28.254 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:28.254 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:17:28.512 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:17:28.512 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:17:28.512 07:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:28.770 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:28.770 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:17:29.028 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:17:29.028 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:17:29.028 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:29.595 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:29.595 07:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:17:29.595 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:17:29.595 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:17:29.595 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:29.595 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:17:29.853 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:17:29.853 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:17:29.853 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:30.419 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:30.419 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:30.419 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:17:30.419 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:17:30.419 07:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:30.678 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:30.678 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:30.935 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:17:30.936 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:17:30.936 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:30.936 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:30.936 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:30.936 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:30.936 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:17:30.936 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:30.936 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:31.194 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:31.194 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:31.194 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:31.194 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:17:31.194 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:31.194 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:17:31.194 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:31.194 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:31.194 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:31.194 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:31.194 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.8Q6b20twa9 00:17:31.194 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:17:31.194 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.DsfkZqeVMP 00:17:31.194 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:31.194 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:31.194 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.8Q6b20twa9 00:17:31.194 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.DsfkZqeVMP 00:17:31.194 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:31.452 07:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:31.710 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.8Q6b20twa9 00:17:31.710 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.8Q6b20twa9 00:17:31.710 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:31.968 [2024-11-20 07:19:35.335311] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:31.968 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:32.534 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:32.534 [2024-11-20 07:19:35.932866] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:32.534 [2024-11-20 07:19:35.933100] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:32.534 07:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:32.792 malloc0 00:17:33.050 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:33.308 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.8Q6b20twa9 00:17:33.567 07:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:33.825 07:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.8Q6b20twa9 00:17:43.789 Initializing NVMe Controllers 00:17:43.789 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:43.789 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:43.789 Initialization complete. Launching workers. 00:17:43.789 ======================================================== 00:17:43.789 Latency(us) 00:17:43.789 Device Information : IOPS MiB/s Average min max 00:17:43.789 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8648.56 33.78 7402.21 1173.49 9310.32 00:17:43.789 ======================================================== 00:17:43.789 Total : 8648.56 33.78 7402.21 1173.49 9310.32 00:17:43.789 00:17:43.789 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8Q6b20twa9 00:17:43.789 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:43.789 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:43.789 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:43.789 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.8Q6b20twa9 00:17:43.789 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:43.789 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2519785 00:17:43.789 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:43.789 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2519785 /var/tmp/bdevperf.sock 00:17:43.789 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:43.789 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2519785 ']' 00:17:43.789 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:43.789 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:43.789 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:17:43.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:43.789 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:43.789 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:43.789 [2024-11-20 07:19:47.204090] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:17:43.789 [2024-11-20 07:19:47.204189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2519785 ] 00:17:44.047 [2024-11-20 07:19:47.276144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.047 [2024-11-20 07:19:47.335473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:44.047 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:44.047 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:17:44.047 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.8Q6b20twa9 00:17:44.304 07:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:44.925 [2024-11-20 07:19:47.994835] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:44.925 TLSTESTn1 00:17:44.925 07:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:44.925 Running I/O for 10 seconds... 
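The trace up to this point is the happy-path TLS bring-up: the ssl sock impl is pinned to TLS 1.3, a TCP transport, subsystem, malloc namespace and a -k (TLS) listener are created, the PSK file is registered through the keyring, and host1 is mapped to that key before spdk_nvme_perf and bdevperf connect over the secured listener. A minimal sketch of the same target-side RPC sequence, with rpc.py standing in for the full scripts/rpc.py path used above, the key/nqn values copied from the log, and the intermediate option checks of tls.sh skipped:

# target-side TLS setup as traced above (sketch; target was started with --wait-for-rpc)
rpc.py sock_set_default_impl -i ssl
rpc.py sock_impl_set_options -i ssl --tls-version 13
rpc.py framework_start_init
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 /tmp/tmp.8Q6b20twa9
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0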
00:17:47.227 3148.00 IOPS, 12.30 MiB/s [2024-11-20T06:19:51.593Z] 3204.50 IOPS, 12.52 MiB/s [2024-11-20T06:19:52.524Z] 3235.00 IOPS, 12.64 MiB/s [2024-11-20T06:19:53.457Z] 3237.50 IOPS, 12.65 MiB/s [2024-11-20T06:19:54.393Z] 3243.20 IOPS, 12.67 MiB/s [2024-11-20T06:19:55.329Z] 3257.67 IOPS, 12.73 MiB/s [2024-11-20T06:19:56.261Z] 3259.00 IOPS, 12.73 MiB/s [2024-11-20T06:19:57.634Z] 3266.00 IOPS, 12.76 MiB/s [2024-11-20T06:19:58.568Z] 3265.89 IOPS, 12.76 MiB/s [2024-11-20T06:19:58.568Z] 3261.30 IOPS, 12.74 MiB/s 00:17:55.135 Latency(us) 00:17:55.135 [2024-11-20T06:19:58.568Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.135 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:55.135 Verification LBA range: start 0x0 length 0x2000 00:17:55.135 TLSTESTn1 : 10.02 3267.63 12.76 0.00 0.00 39109.66 6359.42 46020.84 00:17:55.135 [2024-11-20T06:19:58.568Z] =================================================================================================================== 00:17:55.135 [2024-11-20T06:19:58.568Z] Total : 3267.63 12.76 0.00 0.00 39109.66 6359.42 46020.84 00:17:55.135 { 00:17:55.135 "results": [ 00:17:55.135 { 00:17:55.135 "job": "TLSTESTn1", 00:17:55.135 "core_mask": "0x4", 00:17:55.135 "workload": "verify", 00:17:55.135 "status": "finished", 00:17:55.135 "verify_range": { 00:17:55.135 "start": 0, 00:17:55.135 "length": 8192 00:17:55.135 }, 00:17:55.135 "queue_depth": 128, 00:17:55.135 "io_size": 4096, 00:17:55.135 "runtime": 10.01949, 00:17:55.135 "iops": 3267.6313864278522, 00:17:55.136 "mibps": 12.764185103233798, 00:17:55.136 "io_failed": 0, 00:17:55.136 "io_timeout": 0, 00:17:55.136 "avg_latency_us": 39109.658156609876, 00:17:55.136 "min_latency_us": 6359.419259259259, 00:17:55.136 "max_latency_us": 46020.83555555555 00:17:55.136 } 00:17:55.136 ], 00:17:55.136 "core_count": 1 00:17:55.136 } 00:17:55.136 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:55.136 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2519785 00:17:55.136 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2519785 ']' 00:17:55.136 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2519785 00:17:55.136 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:17:55.136 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:55.136 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2519785 00:17:55.136 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:17:55.136 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:17:55.136 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2519785' 00:17:55.136 killing process with pid 2519785 00:17:55.136 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2519785 00:17:55.136 Received shutdown signal, test time was about 10.000000 seconds 00:17:55.136 00:17:55.136 Latency(us) 00:17:55.136 [2024-11-20T06:19:58.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.136 [2024-11-20T06:19:58.569Z] 
=================================================================================================================== 00:17:55.136 [2024-11-20T06:19:58.569Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:55.136 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2519785 00:17:55.136 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DsfkZqeVMP 00:17:55.136 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:55.136 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DsfkZqeVMP 00:17:55.136 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:55.136 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:55.136 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:55.136 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:55.136 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DsfkZqeVMP 00:17:55.136 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:55.136 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:55.136 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:55.136 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.DsfkZqeVMP 00:17:55.136 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:55.136 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2521085 00:17:55.136 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:55.136 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:55.136 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2521085 /var/tmp/bdevperf.sock 00:17:55.136 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2521085 ']' 00:17:55.136 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:55.136 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:55.136 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:55.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:55.136 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:55.136 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:55.394 [2024-11-20 07:19:58.598897] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:17:55.394 [2024-11-20 07:19:58.598981] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2521085 ] 00:17:55.394 [2024-11-20 07:19:58.666056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.394 [2024-11-20 07:19:58.726247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:55.652 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:55.652 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:17:55.652 07:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DsfkZqeVMP 00:17:55.913 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:55.913 [2024-11-20 07:19:59.343712] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:56.172 [2024-11-20 07:19:59.350954] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:56.172 [2024-11-20 07:19:59.351126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ec2c0 (107): Transport endpoint is not connected 00:17:56.172 [2024-11-20 07:19:59.352115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ec2c0 (9): Bad file descriptor 00:17:56.172 [2024-11-20 07:19:59.353116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:17:56.172 [2024-11-20 07:19:59.353135] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:56.172 [2024-11-20 07:19:59.353163] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:17:56.172 [2024-11-20 07:19:59.353182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:17:56.172 request: 00:17:56.172 { 00:17:56.172 "name": "TLSTEST", 00:17:56.172 "trtype": "tcp", 00:17:56.172 "traddr": "10.0.0.2", 00:17:56.172 "adrfam": "ipv4", 00:17:56.172 "trsvcid": "4420", 00:17:56.172 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:56.172 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:56.172 "prchk_reftag": false, 00:17:56.172 "prchk_guard": false, 00:17:56.172 "hdgst": false, 00:17:56.172 "ddgst": false, 00:17:56.172 "psk": "key0", 00:17:56.172 "allow_unrecognized_csi": false, 00:17:56.172 "method": "bdev_nvme_attach_controller", 00:17:56.172 "req_id": 1 00:17:56.172 } 00:17:56.172 Got JSON-RPC error response 00:17:56.172 response: 00:17:56.172 { 00:17:56.172 "code": -5, 00:17:56.172 "message": "Input/output error" 00:17:56.172 } 00:17:56.172 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2521085 00:17:56.172 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2521085 ']' 00:17:56.172 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2521085 00:17:56.172 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:17:56.172 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:56.172 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2521085 00:17:56.172 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:17:56.172 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:17:56.172 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2521085' 00:17:56.172 killing process with pid 2521085 00:17:56.172 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2521085 00:17:56.172 Received shutdown signal, test time was about 10.000000 seconds 00:17:56.172 00:17:56.172 Latency(us) 00:17:56.172 [2024-11-20T06:19:59.605Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.172 [2024-11-20T06:19:59.605Z] =================================================================================================================== 00:17:56.172 [2024-11-20T06:19:59.605Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:56.172 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2521085 00:17:56.431 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:56.431 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:56.431 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:56.431 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:56.431 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:56.431 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.8Q6b20twa9 00:17:56.431 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:56.431 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.8Q6b20twa9 00:17:56.431 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:56.431 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:56.431 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:56.431 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:56.431 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.8Q6b20twa9 00:17:56.431 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:56.431 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:56.431 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:56.431 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.8Q6b20twa9 00:17:56.431 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:56.431 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2521225 00:17:56.431 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:56.431 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:56.431 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2521225 /var/tmp/bdevperf.sock 00:17:56.431 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2521225 ']' 00:17:56.431 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:56.431 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:56.431 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:56.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:56.431 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:56.431 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:56.431 [2024-11-20 07:19:59.681731] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:17:56.431 [2024-11-20 07:19:59.681816] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2521225 ] 00:17:56.431 [2024-11-20 07:19:59.747153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.431 [2024-11-20 07:19:59.804750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:56.690 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:56.690 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:17:56.690 07:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.8Q6b20twa9 00:17:56.948 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:17:57.206 [2024-11-20 07:20:00.446643] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:57.206 [2024-11-20 07:20:00.453712] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:57.206 [2024-11-20 07:20:00.453744] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:57.206 [2024-11-20 07:20:00.453781] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:57.206 [2024-11-20 07:20:00.453897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9272c0 (107): Transport endpoint is not connected 00:17:57.206 [2024-11-20 07:20:00.454886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9272c0 (9): Bad file descriptor 00:17:57.206 [2024-11-20 07:20:00.455901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:17:57.206 [2024-11-20 07:20:00.455929] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:57.207 [2024-11-20 07:20:00.455944] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:17:57.207 [2024-11-20 07:20:00.455985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
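The target-side errors in this pass show what is actually looked up during the handshake: the PSK identity is the string "NVMe0R01 <hostnqn> <subnqn>" (here "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1"), so a key that was registered under a different host/subsystem pairing is simply not found and the connection is dropped. A target-side registration that would satisfy this particular identity would mirror the add_host call used later in this log, with host2 substituted (hypothetical for this run):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key0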
00:17:57.207 request: 00:17:57.207 { 00:17:57.207 "name": "TLSTEST", 00:17:57.207 "trtype": "tcp", 00:17:57.207 "traddr": "10.0.0.2", 00:17:57.207 "adrfam": "ipv4", 00:17:57.207 "trsvcid": "4420", 00:17:57.207 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:57.207 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:57.207 "prchk_reftag": false, 00:17:57.207 "prchk_guard": false, 00:17:57.207 "hdgst": false, 00:17:57.207 "ddgst": false, 00:17:57.207 "psk": "key0", 00:17:57.207 "allow_unrecognized_csi": false, 00:17:57.207 "method": "bdev_nvme_attach_controller", 00:17:57.207 "req_id": 1 00:17:57.207 } 00:17:57.207 Got JSON-RPC error response 00:17:57.207 response: 00:17:57.207 { 00:17:57.207 "code": -5, 00:17:57.207 "message": "Input/output error" 00:17:57.207 } 00:17:57.207 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2521225 00:17:57.207 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2521225 ']' 00:17:57.207 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2521225 00:17:57.207 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:17:57.207 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:57.207 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2521225 00:17:57.207 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:17:57.207 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:17:57.207 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2521225' 00:17:57.207 killing process with pid 2521225 00:17:57.207 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2521225 00:17:57.207 Received shutdown signal, test time was about 10.000000 seconds 00:17:57.207 00:17:57.207 Latency(us) 00:17:57.207 [2024-11-20T06:20:00.640Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.207 [2024-11-20T06:20:00.640Z] =================================================================================================================== 00:17:57.207 [2024-11-20T06:20:00.640Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:57.207 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2521225 00:17:57.465 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:57.465 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:57.465 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:57.465 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:57.465 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:57.465 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.8Q6b20twa9 00:17:57.465 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:57.465 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.8Q6b20twa9 00:17:57.465 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:57.465 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:57.465 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:57.465 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:57.465 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.8Q6b20twa9 00:17:57.465 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:57.465 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:57.465 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:57.465 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.8Q6b20twa9 00:17:57.465 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:57.465 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2521365 00:17:57.465 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:57.465 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:57.465 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2521365 /var/tmp/bdevperf.sock 00:17:57.465 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2521365 ']' 00:17:57.465 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:57.465 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:57.465 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:57.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:57.466 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:57.466 07:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:57.466 [2024-11-20 07:20:00.785566] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:17:57.466 [2024-11-20 07:20:00.785659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2521365 ] 00:17:57.466 [2024-11-20 07:20:00.851371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.724 [2024-11-20 07:20:00.910051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.724 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:57.724 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:17:57.724 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.8Q6b20twa9 00:17:57.982 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:58.240 [2024-11-20 07:20:01.595444] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:58.240 [2024-11-20 07:20:01.606685] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:58.240 [2024-11-20 07:20:01.606717] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:58.240 [2024-11-20 07:20:01.606769] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:58.240 [2024-11-20 07:20:01.607558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x93b2c0 (107): Transport endpoint is not connected 00:17:58.240 [2024-11-20 07:20:01.608548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x93b2c0 (9): Bad file descriptor 00:17:58.240 [2024-11-20 07:20:01.609549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:17:58.240 [2024-11-20 07:20:01.609570] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:58.240 [2024-11-20 07:20:01.609591] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:17:58.240 [2024-11-20 07:20:01.609611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:17:58.240 request: 00:17:58.240 { 00:17:58.240 "name": "TLSTEST", 00:17:58.240 "trtype": "tcp", 00:17:58.240 "traddr": "10.0.0.2", 00:17:58.240 "adrfam": "ipv4", 00:17:58.240 "trsvcid": "4420", 00:17:58.240 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:58.240 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:58.240 "prchk_reftag": false, 00:17:58.240 "prchk_guard": false, 00:17:58.240 "hdgst": false, 00:17:58.240 "ddgst": false, 00:17:58.240 "psk": "key0", 00:17:58.240 "allow_unrecognized_csi": false, 00:17:58.240 "method": "bdev_nvme_attach_controller", 00:17:58.240 "req_id": 1 00:17:58.240 } 00:17:58.240 Got JSON-RPC error response 00:17:58.240 response: 00:17:58.240 { 00:17:58.240 "code": -5, 00:17:58.240 "message": "Input/output error" 00:17:58.240 } 00:17:58.240 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2521365 00:17:58.240 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2521365 ']' 00:17:58.240 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2521365 00:17:58.240 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:17:58.240 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:58.240 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2521365 00:17:58.498 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:17:58.498 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:17:58.498 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2521365' 00:17:58.498 killing process with pid 2521365 00:17:58.498 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2521365 00:17:58.498 Received shutdown signal, test time was about 10.000000 seconds 00:17:58.498 00:17:58.498 Latency(us) 00:17:58.498 [2024-11-20T06:20:01.931Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.498 [2024-11-20T06:20:01.931Z] =================================================================================================================== 00:17:58.498 [2024-11-20T06:20:01.931Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:58.498 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2521365 00:17:58.498 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:58.498 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:58.498 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:58.498 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:58.498 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:58.498 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:58.498 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:58.498 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:58.498 
07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:58.498 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:58.498 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:58.498 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:58.498 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:58.498 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:58.498 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:58.498 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:58.498 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:58.498 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:58.498 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2521507 00:17:58.498 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:58.498 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:58.498 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2521507 /var/tmp/bdevperf.sock 00:17:58.499 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2521507 ']' 00:17:58.499 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:58.499 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:58.499 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:58.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:58.499 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:58.499 07:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:58.757 [2024-11-20 07:20:01.934715] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:17:58.757 [2024-11-20 07:20:01.934800] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2521507 ] 00:17:58.757 [2024-11-20 07:20:02.002861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.757 [2024-11-20 07:20:02.062576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:58.757 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:58.757 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:17:58.757 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:17:59.015 [2024-11-20 07:20:02.422637] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:17:59.015 [2024-11-20 07:20:02.422682] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:17:59.015 request: 00:17:59.015 { 00:17:59.015 "name": "key0", 00:17:59.015 "path": "", 00:17:59.015 "method": "keyring_file_add_key", 00:17:59.015 "req_id": 1 00:17:59.015 } 00:17:59.015 Got JSON-RPC error response 00:17:59.015 response: 00:17:59.015 { 00:17:59.015 "code": -1, 00:17:59.015 "message": "Operation not permitted" 00:17:59.015 } 00:17:59.015 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:59.273 [2024-11-20 07:20:02.687467] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:59.273 [2024-11-20 07:20:02.687523] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:17:59.273 request: 00:17:59.273 { 00:17:59.273 "name": "TLSTEST", 00:17:59.273 "trtype": "tcp", 00:17:59.273 "traddr": "10.0.0.2", 00:17:59.273 "adrfam": "ipv4", 00:17:59.273 "trsvcid": "4420", 00:17:59.273 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:59.273 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:59.273 "prchk_reftag": false, 00:17:59.273 "prchk_guard": false, 00:17:59.273 "hdgst": false, 00:17:59.273 "ddgst": false, 00:17:59.273 "psk": "key0", 00:17:59.273 "allow_unrecognized_csi": false, 00:17:59.273 "method": "bdev_nvme_attach_controller", 00:17:59.273 "req_id": 1 00:17:59.273 } 00:17:59.273 Got JSON-RPC error response 00:17:59.273 response: 00:17:59.273 { 00:17:59.273 "code": -126, 00:17:59.273 "message": "Required key not available" 00:17:59.273 } 00:17:59.531 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2521507 00:17:59.531 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2521507 ']' 00:17:59.531 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2521507 00:17:59.531 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:17:59.531 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:59.531 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 
2521507 00:17:59.531 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:17:59.531 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:17:59.531 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2521507' 00:17:59.531 killing process with pid 2521507 00:17:59.531 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2521507 00:17:59.531 Received shutdown signal, test time was about 10.000000 seconds 00:17:59.531 00:17:59.531 Latency(us) 00:17:59.531 [2024-11-20T06:20:02.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.531 [2024-11-20T06:20:02.964Z] =================================================================================================================== 00:17:59.531 [2024-11-20T06:20:02.964Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:59.531 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2521507 00:17:59.790 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:59.790 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:59.790 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:59.790 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:59.790 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:59.790 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2517849 00:17:59.790 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2517849 ']' 00:17:59.790 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2517849 00:17:59.790 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:17:59.790 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:59.790 07:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2517849 00:17:59.790 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:59.790 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:59.790 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2517849' 00:17:59.790 killing process with pid 2517849 00:17:59.790 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2517849 00:17:59.790 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2517849 00:18:00.048 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:00.048 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:00.048 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:00.048 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:00.048 07:20:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:00.048 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:00.048 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:00.048 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:00.048 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:00.048 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.yOPKBkQtAb 00:18:00.048 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:00.048 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.yOPKBkQtAb 00:18:00.048 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:00.048 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:00.049 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:00.049 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.049 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2521772 00:18:00.049 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:00.049 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2521772 00:18:00.049 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2521772 ']' 00:18:00.049 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.049 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:00.049 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.049 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:00.049 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.049 [2024-11-20 07:20:03.355086] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:18:00.049 [2024-11-20 07:20:03.355172] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.049 [2024-11-20 07:20:03.432741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.308 [2024-11-20 07:20:03.492394] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:00.308 [2024-11-20 07:20:03.492443] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
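The key generated above follows the NVMe/TCP PSK interchange format visible in the output: a "NVMeTLSkey-1:<digest>:" prefix, a base64 payload, and a trailing ":". The harness builds it with an inline python snippet (nvmf/common.sh@733); a rough stand-alone equivalent, assuming the payload is the configured key bytes with a little-endian CRC-32 appended, would be:

  # hypothetical re-creation of format_interchange_psk for digest 2 (SHA-384)
  python3 -c 'import base64,struct,zlib,sys; k=sys.argv[1].encode(); print("NVMeTLSkey-1:02:"+base64.b64encode(k+struct.pack("<I",zlib.crc32(k))).decode()+":")' 00112233445566778899aabbccddeeff0011223344556677

The digest argument of 2 selects the :02: variant seen here; the resulting key is written to /tmp/tmp.yOPKBkQtAb and locked down to mode 0600 before being handed to the keyring.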
00:18:00.308 [2024-11-20 07:20:03.492456] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:00.308 [2024-11-20 07:20:03.492467] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:00.308 [2024-11-20 07:20:03.492477] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:00.308 [2024-11-20 07:20:03.493040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.308 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:00.308 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:00.308 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:00.308 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:00.308 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.308 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:00.308 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.yOPKBkQtAb 00:18:00.308 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.yOPKBkQtAb 00:18:00.308 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:00.566 [2024-11-20 07:20:03.943618] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:00.566 07:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:00.824 07:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:01.084 [2024-11-20 07:20:04.489051] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:01.084 [2024-11-20 07:20:04.489279] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:01.084 07:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:01.711 malloc0 00:18:01.711 07:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:01.969 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.yOPKBkQtAb 00:18:02.226 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:02.484 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yOPKBkQtAb 00:18:02.484 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:18:02.484 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:02.484 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:02.484 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yOPKBkQtAb 00:18:02.484 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:02.484 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2522072 00:18:02.484 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:02.484 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:02.484 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2522072 /var/tmp/bdevperf.sock 00:18:02.484 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2522072 ']' 00:18:02.484 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:02.484 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:02.484 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:02.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:02.484 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:02.484 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:02.484 [2024-11-20 07:20:05.748156] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
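The successful run measured below depends on the target-side sequence captured just above. Condensed into its individual RPCs (all taken verbatim from this log, issued against the target's default RPC socket; the full scripts/rpc.py path is abbreviated here):

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.yOPKBkQtAb
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The -k flag on the listener enables TLS on that port and --psk ties the registered key to host1; with both sides holding the same key file, the bdevperf attach below completes and TLSTESTn1 runs the 10-second verify workload.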
00:18:02.484 [2024-11-20 07:20:05.748236] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2522072 ] 00:18:02.484 [2024-11-20 07:20:05.813499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.484 [2024-11-20 07:20:05.871001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:02.742 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:02.742 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:02.742 07:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yOPKBkQtAb 00:18:03.000 07:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:03.258 [2024-11-20 07:20:06.492445] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:03.258 TLSTESTn1 00:18:03.258 07:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:03.517 Running I/O for 10 seconds... 00:18:05.382 3456.00 IOPS, 13.50 MiB/s [2024-11-20T06:20:09.748Z] 3535.50 IOPS, 13.81 MiB/s [2024-11-20T06:20:11.119Z] 3572.67 IOPS, 13.96 MiB/s [2024-11-20T06:20:12.060Z] 3590.25 IOPS, 14.02 MiB/s [2024-11-20T06:20:12.999Z] 3604.00 IOPS, 14.08 MiB/s [2024-11-20T06:20:13.927Z] 3609.83 IOPS, 14.10 MiB/s [2024-11-20T06:20:14.859Z] 3601.71 IOPS, 14.07 MiB/s [2024-11-20T06:20:15.791Z] 3611.50 IOPS, 14.11 MiB/s [2024-11-20T06:20:16.723Z] 3608.67 IOPS, 14.10 MiB/s [2024-11-20T06:20:16.981Z] 3610.80 IOPS, 14.10 MiB/s 00:18:13.548 Latency(us) 00:18:13.548 [2024-11-20T06:20:16.981Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.548 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:13.548 Verification LBA range: start 0x0 length 0x2000 00:18:13.548 TLSTESTn1 : 10.02 3616.78 14.13 0.00 0.00 35332.50 6747.78 32816.55 00:18:13.548 [2024-11-20T06:20:16.981Z] =================================================================================================================== 00:18:13.548 [2024-11-20T06:20:16.981Z] Total : 3616.78 14.13 0.00 0.00 35332.50 6747.78 32816.55 00:18:13.548 { 00:18:13.548 "results": [ 00:18:13.548 { 00:18:13.548 "job": "TLSTESTn1", 00:18:13.548 "core_mask": "0x4", 00:18:13.548 "workload": "verify", 00:18:13.548 "status": "finished", 00:18:13.548 "verify_range": { 00:18:13.548 "start": 0, 00:18:13.548 "length": 8192 00:18:13.548 }, 00:18:13.548 "queue_depth": 128, 00:18:13.548 "io_size": 4096, 00:18:13.548 "runtime": 10.018853, 00:18:13.548 "iops": 3616.7812822485766, 00:18:13.548 "mibps": 14.128051883783503, 00:18:13.548 "io_failed": 0, 00:18:13.548 "io_timeout": 0, 00:18:13.548 "avg_latency_us": 35332.50296582485, 00:18:13.548 "min_latency_us": 6747.780740740741, 00:18:13.548 "max_latency_us": 32816.54518518518 00:18:13.548 } 00:18:13.548 ], 00:18:13.548 
"core_count": 1 00:18:13.548 } 00:18:13.548 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:13.548 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2522072 00:18:13.548 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2522072 ']' 00:18:13.548 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2522072 00:18:13.548 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:13.548 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:13.548 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2522072 00:18:13.548 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:13.548 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:13.548 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2522072' 00:18:13.548 killing process with pid 2522072 00:18:13.548 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2522072 00:18:13.548 Received shutdown signal, test time was about 10.000000 seconds 00:18:13.548 00:18:13.548 Latency(us) 00:18:13.548 [2024-11-20T06:20:16.981Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.548 [2024-11-20T06:20:16.981Z] =================================================================================================================== 00:18:13.548 [2024-11-20T06:20:16.981Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:13.548 07:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2522072 00:18:13.806 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.yOPKBkQtAb 00:18:13.806 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yOPKBkQtAb 00:18:13.806 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:13.806 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yOPKBkQtAb 00:18:13.806 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:13.806 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:13.806 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:13.806 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:13.806 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yOPKBkQtAb 00:18:13.806 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:13.806 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:13.806 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:18:13.806 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yOPKBkQtAb 00:18:13.806 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:13.806 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2523391 00:18:13.806 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:13.806 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:13.806 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2523391 /var/tmp/bdevperf.sock 00:18:13.806 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2523391 ']' 00:18:13.806 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:13.806 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:13.806 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:13.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:13.806 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:13.806 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:13.806 [2024-11-20 07:20:17.061884] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:18:13.806 [2024-11-20 07:20:17.061969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2523391 ] 00:18:13.806 [2024-11-20 07:20:17.127891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.806 [2024-11-20 07:20:17.186464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:14.063 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:14.063 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:14.063 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yOPKBkQtAb 00:18:14.320 [2024-11-20 07:20:17.545405] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.yOPKBkQtAb': 0100666 00:18:14.320 [2024-11-20 07:20:17.545450] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:14.320 request: 00:18:14.320 { 00:18:14.320 "name": "key0", 00:18:14.320 "path": "/tmp/tmp.yOPKBkQtAb", 00:18:14.320 "method": "keyring_file_add_key", 00:18:14.320 "req_id": 1 00:18:14.320 } 00:18:14.320 Got JSON-RPC error response 00:18:14.320 response: 00:18:14.320 { 00:18:14.320 "code": -1, 00:18:14.320 "message": "Operation not permitted" 00:18:14.320 } 00:18:14.320 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:14.577 [2024-11-20 07:20:17.870346] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:14.578 [2024-11-20 07:20:17.870409] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:14.578 request: 00:18:14.578 { 00:18:14.578 "name": "TLSTEST", 00:18:14.578 "trtype": "tcp", 00:18:14.578 "traddr": "10.0.0.2", 00:18:14.578 "adrfam": "ipv4", 00:18:14.578 "trsvcid": "4420", 00:18:14.578 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:14.578 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:14.578 "prchk_reftag": false, 00:18:14.578 "prchk_guard": false, 00:18:14.578 "hdgst": false, 00:18:14.578 "ddgst": false, 00:18:14.578 "psk": "key0", 00:18:14.578 "allow_unrecognized_csi": false, 00:18:14.578 "method": "bdev_nvme_attach_controller", 00:18:14.578 "req_id": 1 00:18:14.578 } 00:18:14.578 Got JSON-RPC error response 00:18:14.578 response: 00:18:14.578 { 00:18:14.578 "code": -126, 00:18:14.578 "message": "Required key not available" 00:18:14.578 } 00:18:14.578 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2523391 00:18:14.578 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2523391 ']' 00:18:14.578 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2523391 00:18:14.578 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:14.578 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:14.578 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2523391 00:18:14.578 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:14.578 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:14.578 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2523391' 00:18:14.578 killing process with pid 2523391 00:18:14.578 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2523391 00:18:14.578 Received shutdown signal, test time was about 10.000000 seconds 00:18:14.578 00:18:14.578 Latency(us) 00:18:14.578 [2024-11-20T06:20:18.011Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.578 [2024-11-20T06:20:18.011Z] =================================================================================================================== 00:18:14.578 [2024-11-20T06:20:18.011Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:14.578 07:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2523391 00:18:14.835 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:14.835 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:14.835 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:14.835 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:14.835 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:14.835 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2521772 00:18:14.835 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2521772 ']' 00:18:14.835 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2521772 00:18:14.835 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:14.835 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:14.835 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2521772 00:18:14.835 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:14.835 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:14.835 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2521772' 00:18:14.835 killing process with pid 2521772 00:18:14.835 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2521772 00:18:14.835 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2521772 00:18:15.093 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:18:15.093 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:15.093 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:15.093 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.093 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=2523536 00:18:15.093 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:15.093 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2523536 00:18:15.093 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2523536 ']' 00:18:15.093 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.093 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:15.093 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:15.093 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:15.093 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.093 [2024-11-20 07:20:18.484866] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:18:15.093 [2024-11-20 07:20:18.484970] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.351 [2024-11-20 07:20:18.555548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.351 [2024-11-20 07:20:18.614360] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:15.351 [2024-11-20 07:20:18.614414] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:15.351 [2024-11-20 07:20:18.614443] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:15.351 [2024-11-20 07:20:18.614454] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:15.351 [2024-11-20 07:20:18.614463] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
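The -126 (Required key not available) failure above traces back to the chmod 0666 applied to /tmp/tmp.yOPKBkQtAb for this negative test: keyring_file_add_key evidently requires owner-only permissions on the key file (the '0100666' it reports), so no 'key0' is ever registered and bdev_nvme_attach_controller cannot load it. The same rejection hits any consumer of that key until the mode is restored, which the harness does further down (tls.sh@182). A minimal sketch of a registration the keyring accepts:

  chmod 0600 /tmp/tmp.yOPKBkQtAb
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.yOPKBkQtAb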
00:18:15.351 [2024-11-20 07:20:18.615006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.351 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:15.351 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:15.351 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:15.351 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:15.351 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.351 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:15.351 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.yOPKBkQtAb 00:18:15.351 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:15.351 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.yOPKBkQtAb 00:18:15.351 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:18:15.351 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:15.351 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:18:15.351 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:15.351 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.yOPKBkQtAb 00:18:15.351 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.yOPKBkQtAb 00:18:15.351 07:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:15.608 [2024-11-20 07:20:18.993159] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:15.608 07:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:15.866 07:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:16.123 [2024-11-20 07:20:19.522578] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:16.123 [2024-11-20 07:20:19.522819] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:16.123 07:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:16.380 malloc0 00:18:16.638 07:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:16.896 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.yOPKBkQtAb 00:18:17.154 [2024-11-20 
07:20:20.343635] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.yOPKBkQtAb': 0100666 00:18:17.154 [2024-11-20 07:20:20.343690] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:17.154 request: 00:18:17.154 { 00:18:17.154 "name": "key0", 00:18:17.154 "path": "/tmp/tmp.yOPKBkQtAb", 00:18:17.154 "method": "keyring_file_add_key", 00:18:17.154 "req_id": 1 00:18:17.154 } 00:18:17.154 Got JSON-RPC error response 00:18:17.154 response: 00:18:17.154 { 00:18:17.154 "code": -1, 00:18:17.154 "message": "Operation not permitted" 00:18:17.154 } 00:18:17.154 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:17.412 [2024-11-20 07:20:20.644453] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:17.412 [2024-11-20 07:20:20.644529] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:17.412 request: 00:18:17.412 { 00:18:17.412 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.412 "host": "nqn.2016-06.io.spdk:host1", 00:18:17.412 "psk": "key0", 00:18:17.412 "method": "nvmf_subsystem_add_host", 00:18:17.412 "req_id": 1 00:18:17.412 } 00:18:17.412 Got JSON-RPC error response 00:18:17.412 response: 00:18:17.412 { 00:18:17.412 "code": -32603, 00:18:17.412 "message": "Internal error" 00:18:17.412 } 00:18:17.412 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:17.412 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:17.412 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:17.412 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:17.412 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2523536 00:18:17.412 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2523536 ']' 00:18:17.412 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2523536 00:18:17.412 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:17.412 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:17.412 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2523536 00:18:17.412 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:17.412 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:17.412 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2523536' 00:18:17.412 killing process with pid 2523536 00:18:17.412 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2523536 00:18:17.412 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2523536 00:18:17.670 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.yOPKBkQtAb 00:18:17.670 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:18:17.670 07:20:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:17.670 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:17.670 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.670 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2523840 00:18:17.670 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:17.670 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2523840 00:18:17.670 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2523840 ']' 00:18:17.670 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.670 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:17.670 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:17.670 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:17.670 07:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.670 [2024-11-20 07:20:20.992369] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:18:17.670 [2024-11-20 07:20:20.992476] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:17.670 [2024-11-20 07:20:21.061622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.928 [2024-11-20 07:20:21.113048] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:17.928 [2024-11-20 07:20:21.113095] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:17.928 [2024-11-20 07:20:21.113131] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:17.928 [2024-11-20 07:20:21.113142] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:17.928 [2024-11-20 07:20:21.113151] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
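The rejected keyring_file_add_key above is the intended negative case: SPDK's file-based keyring refuses a PSK file that is readable by group or other (mode 0666 here), which is why tls.sh tightens the key to 0600 before retrying. A minimal sketch of the target-side sequence the test drives through rpc.py, using only the key path, addresses and NQNs shown in the log (anything beyond that is an assumption):

# hedged sketch of setup_nvmf_tgt() on the target side; rpc.py path abbreviated
chmod 0600 /tmp/tmp.yOPKBkQtAb        # a 0666 key is rejected, as logged above
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener (experimental)
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.yOPKBkQtAb
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0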
00:18:17.928 [2024-11-20 07:20:21.113743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:17.928 07:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:17.928 07:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:17.928 07:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:17.928 07:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:17.928 07:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.928 07:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.928 07:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.yOPKBkQtAb 00:18:17.928 07:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.yOPKBkQtAb 00:18:17.928 07:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:18.186 [2024-11-20 07:20:21.518938] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:18.186 07:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:18.445 07:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:18.702 [2024-11-20 07:20:22.056404] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:18.702 [2024-11-20 07:20:22.056661] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:18.702 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:18.961 malloc0 00:18:18.961 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:19.220 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.yOPKBkQtAb 00:18:19.477 07:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:19.735 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2524123 00:18:19.735 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:19.735 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:19.735 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2524123 /var/tmp/bdevperf.sock 00:18:19.735 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 2524123 ']' 00:18:19.735 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:19.735 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:19.735 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:19.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:19.735 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:19.735 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.993 [2024-11-20 07:20:23.185469] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:18:19.993 [2024-11-20 07:20:23.185547] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2524123 ] 00:18:19.993 [2024-11-20 07:20:23.250745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.993 [2024-11-20 07:20:23.308384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.993 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:19.993 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:19.993 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yOPKBkQtAb 00:18:20.559 07:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:20.559 [2024-11-20 07:20:23.952565] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:20.817 TLSTESTn1 00:18:20.817 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:21.075 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:18:21.075 "subsystems": [ 00:18:21.075 { 00:18:21.075 "subsystem": "keyring", 00:18:21.075 "config": [ 00:18:21.075 { 00:18:21.075 "method": "keyring_file_add_key", 00:18:21.075 "params": { 00:18:21.075 "name": "key0", 00:18:21.075 "path": "/tmp/tmp.yOPKBkQtAb" 00:18:21.075 } 00:18:21.075 } 00:18:21.075 ] 00:18:21.075 }, 00:18:21.075 { 00:18:21.075 "subsystem": "iobuf", 00:18:21.075 "config": [ 00:18:21.075 { 00:18:21.075 "method": "iobuf_set_options", 00:18:21.075 "params": { 00:18:21.075 "small_pool_count": 8192, 00:18:21.075 "large_pool_count": 1024, 00:18:21.075 "small_bufsize": 8192, 00:18:21.075 "large_bufsize": 135168, 00:18:21.075 "enable_numa": false 00:18:21.075 } 00:18:21.075 } 00:18:21.075 ] 00:18:21.075 }, 00:18:21.075 { 00:18:21.075 "subsystem": "sock", 00:18:21.075 "config": [ 00:18:21.075 { 00:18:21.075 "method": "sock_set_default_impl", 00:18:21.075 "params": { 00:18:21.075 "impl_name": "posix" 
00:18:21.075 } 00:18:21.075 }, 00:18:21.075 { 00:18:21.075 "method": "sock_impl_set_options", 00:18:21.075 "params": { 00:18:21.075 "impl_name": "ssl", 00:18:21.075 "recv_buf_size": 4096, 00:18:21.075 "send_buf_size": 4096, 00:18:21.075 "enable_recv_pipe": true, 00:18:21.075 "enable_quickack": false, 00:18:21.075 "enable_placement_id": 0, 00:18:21.075 "enable_zerocopy_send_server": true, 00:18:21.075 "enable_zerocopy_send_client": false, 00:18:21.075 "zerocopy_threshold": 0, 00:18:21.075 "tls_version": 0, 00:18:21.075 "enable_ktls": false 00:18:21.075 } 00:18:21.075 }, 00:18:21.075 { 00:18:21.075 "method": "sock_impl_set_options", 00:18:21.075 "params": { 00:18:21.075 "impl_name": "posix", 00:18:21.075 "recv_buf_size": 2097152, 00:18:21.075 "send_buf_size": 2097152, 00:18:21.075 "enable_recv_pipe": true, 00:18:21.075 "enable_quickack": false, 00:18:21.075 "enable_placement_id": 0, 00:18:21.075 "enable_zerocopy_send_server": true, 00:18:21.075 "enable_zerocopy_send_client": false, 00:18:21.075 "zerocopy_threshold": 0, 00:18:21.075 "tls_version": 0, 00:18:21.076 "enable_ktls": false 00:18:21.076 } 00:18:21.076 } 00:18:21.076 ] 00:18:21.076 }, 00:18:21.076 { 00:18:21.076 "subsystem": "vmd", 00:18:21.076 "config": [] 00:18:21.076 }, 00:18:21.076 { 00:18:21.076 "subsystem": "accel", 00:18:21.076 "config": [ 00:18:21.076 { 00:18:21.076 "method": "accel_set_options", 00:18:21.076 "params": { 00:18:21.076 "small_cache_size": 128, 00:18:21.076 "large_cache_size": 16, 00:18:21.076 "task_count": 2048, 00:18:21.076 "sequence_count": 2048, 00:18:21.076 "buf_count": 2048 00:18:21.076 } 00:18:21.076 } 00:18:21.076 ] 00:18:21.076 }, 00:18:21.076 { 00:18:21.076 "subsystem": "bdev", 00:18:21.076 "config": [ 00:18:21.076 { 00:18:21.076 "method": "bdev_set_options", 00:18:21.076 "params": { 00:18:21.076 "bdev_io_pool_size": 65535, 00:18:21.076 "bdev_io_cache_size": 256, 00:18:21.076 "bdev_auto_examine": true, 00:18:21.076 "iobuf_small_cache_size": 128, 00:18:21.076 "iobuf_large_cache_size": 16 00:18:21.076 } 00:18:21.076 }, 00:18:21.076 { 00:18:21.076 "method": "bdev_raid_set_options", 00:18:21.076 "params": { 00:18:21.076 "process_window_size_kb": 1024, 00:18:21.076 "process_max_bandwidth_mb_sec": 0 00:18:21.076 } 00:18:21.076 }, 00:18:21.076 { 00:18:21.076 "method": "bdev_iscsi_set_options", 00:18:21.076 "params": { 00:18:21.076 "timeout_sec": 30 00:18:21.076 } 00:18:21.076 }, 00:18:21.076 { 00:18:21.076 "method": "bdev_nvme_set_options", 00:18:21.076 "params": { 00:18:21.076 "action_on_timeout": "none", 00:18:21.076 "timeout_us": 0, 00:18:21.076 "timeout_admin_us": 0, 00:18:21.076 "keep_alive_timeout_ms": 10000, 00:18:21.076 "arbitration_burst": 0, 00:18:21.076 "low_priority_weight": 0, 00:18:21.076 "medium_priority_weight": 0, 00:18:21.076 "high_priority_weight": 0, 00:18:21.076 "nvme_adminq_poll_period_us": 10000, 00:18:21.076 "nvme_ioq_poll_period_us": 0, 00:18:21.076 "io_queue_requests": 0, 00:18:21.076 "delay_cmd_submit": true, 00:18:21.076 "transport_retry_count": 4, 00:18:21.076 "bdev_retry_count": 3, 00:18:21.076 "transport_ack_timeout": 0, 00:18:21.076 "ctrlr_loss_timeout_sec": 0, 00:18:21.076 "reconnect_delay_sec": 0, 00:18:21.076 "fast_io_fail_timeout_sec": 0, 00:18:21.076 "disable_auto_failback": false, 00:18:21.076 "generate_uuids": false, 00:18:21.076 "transport_tos": 0, 00:18:21.076 "nvme_error_stat": false, 00:18:21.076 "rdma_srq_size": 0, 00:18:21.076 "io_path_stat": false, 00:18:21.076 "allow_accel_sequence": false, 00:18:21.076 "rdma_max_cq_size": 0, 00:18:21.076 
"rdma_cm_event_timeout_ms": 0, 00:18:21.076 "dhchap_digests": [ 00:18:21.076 "sha256", 00:18:21.076 "sha384", 00:18:21.076 "sha512" 00:18:21.076 ], 00:18:21.076 "dhchap_dhgroups": [ 00:18:21.076 "null", 00:18:21.076 "ffdhe2048", 00:18:21.076 "ffdhe3072", 00:18:21.076 "ffdhe4096", 00:18:21.076 "ffdhe6144", 00:18:21.076 "ffdhe8192" 00:18:21.076 ] 00:18:21.076 } 00:18:21.076 }, 00:18:21.076 { 00:18:21.076 "method": "bdev_nvme_set_hotplug", 00:18:21.076 "params": { 00:18:21.076 "period_us": 100000, 00:18:21.076 "enable": false 00:18:21.076 } 00:18:21.076 }, 00:18:21.076 { 00:18:21.076 "method": "bdev_malloc_create", 00:18:21.076 "params": { 00:18:21.076 "name": "malloc0", 00:18:21.076 "num_blocks": 8192, 00:18:21.076 "block_size": 4096, 00:18:21.076 "physical_block_size": 4096, 00:18:21.076 "uuid": "952713cd-6af6-4a41-aacb-695c1b1c169e", 00:18:21.076 "optimal_io_boundary": 0, 00:18:21.076 "md_size": 0, 00:18:21.076 "dif_type": 0, 00:18:21.076 "dif_is_head_of_md": false, 00:18:21.076 "dif_pi_format": 0 00:18:21.076 } 00:18:21.076 }, 00:18:21.076 { 00:18:21.076 "method": "bdev_wait_for_examine" 00:18:21.076 } 00:18:21.076 ] 00:18:21.076 }, 00:18:21.076 { 00:18:21.076 "subsystem": "nbd", 00:18:21.076 "config": [] 00:18:21.076 }, 00:18:21.076 { 00:18:21.076 "subsystem": "scheduler", 00:18:21.076 "config": [ 00:18:21.076 { 00:18:21.076 "method": "framework_set_scheduler", 00:18:21.076 "params": { 00:18:21.076 "name": "static" 00:18:21.076 } 00:18:21.076 } 00:18:21.076 ] 00:18:21.076 }, 00:18:21.076 { 00:18:21.076 "subsystem": "nvmf", 00:18:21.076 "config": [ 00:18:21.076 { 00:18:21.076 "method": "nvmf_set_config", 00:18:21.076 "params": { 00:18:21.076 "discovery_filter": "match_any", 00:18:21.076 "admin_cmd_passthru": { 00:18:21.076 "identify_ctrlr": false 00:18:21.076 }, 00:18:21.076 "dhchap_digests": [ 00:18:21.076 "sha256", 00:18:21.076 "sha384", 00:18:21.076 "sha512" 00:18:21.076 ], 00:18:21.076 "dhchap_dhgroups": [ 00:18:21.076 "null", 00:18:21.076 "ffdhe2048", 00:18:21.076 "ffdhe3072", 00:18:21.076 "ffdhe4096", 00:18:21.076 "ffdhe6144", 00:18:21.076 "ffdhe8192" 00:18:21.076 ] 00:18:21.076 } 00:18:21.076 }, 00:18:21.076 { 00:18:21.076 "method": "nvmf_set_max_subsystems", 00:18:21.076 "params": { 00:18:21.076 "max_subsystems": 1024 00:18:21.076 } 00:18:21.076 }, 00:18:21.076 { 00:18:21.076 "method": "nvmf_set_crdt", 00:18:21.076 "params": { 00:18:21.076 "crdt1": 0, 00:18:21.076 "crdt2": 0, 00:18:21.076 "crdt3": 0 00:18:21.076 } 00:18:21.076 }, 00:18:21.076 { 00:18:21.076 "method": "nvmf_create_transport", 00:18:21.076 "params": { 00:18:21.076 "trtype": "TCP", 00:18:21.076 "max_queue_depth": 128, 00:18:21.076 "max_io_qpairs_per_ctrlr": 127, 00:18:21.076 "in_capsule_data_size": 4096, 00:18:21.076 "max_io_size": 131072, 00:18:21.076 "io_unit_size": 131072, 00:18:21.076 "max_aq_depth": 128, 00:18:21.076 "num_shared_buffers": 511, 00:18:21.076 "buf_cache_size": 4294967295, 00:18:21.076 "dif_insert_or_strip": false, 00:18:21.076 "zcopy": false, 00:18:21.076 "c2h_success": false, 00:18:21.076 "sock_priority": 0, 00:18:21.076 "abort_timeout_sec": 1, 00:18:21.076 "ack_timeout": 0, 00:18:21.076 "data_wr_pool_size": 0 00:18:21.076 } 00:18:21.076 }, 00:18:21.076 { 00:18:21.076 "method": "nvmf_create_subsystem", 00:18:21.076 "params": { 00:18:21.077 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.077 "allow_any_host": false, 00:18:21.077 "serial_number": "SPDK00000000000001", 00:18:21.077 "model_number": "SPDK bdev Controller", 00:18:21.077 "max_namespaces": 10, 00:18:21.077 "min_cntlid": 1, 00:18:21.077 
"max_cntlid": 65519, 00:18:21.077 "ana_reporting": false 00:18:21.077 } 00:18:21.077 }, 00:18:21.077 { 00:18:21.077 "method": "nvmf_subsystem_add_host", 00:18:21.077 "params": { 00:18:21.077 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.077 "host": "nqn.2016-06.io.spdk:host1", 00:18:21.077 "psk": "key0" 00:18:21.077 } 00:18:21.077 }, 00:18:21.077 { 00:18:21.077 "method": "nvmf_subsystem_add_ns", 00:18:21.077 "params": { 00:18:21.077 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.077 "namespace": { 00:18:21.077 "nsid": 1, 00:18:21.077 "bdev_name": "malloc0", 00:18:21.077 "nguid": "952713CD6AF64A41AACB695C1B1C169E", 00:18:21.077 "uuid": "952713cd-6af6-4a41-aacb-695c1b1c169e", 00:18:21.077 "no_auto_visible": false 00:18:21.077 } 00:18:21.077 } 00:18:21.077 }, 00:18:21.077 { 00:18:21.077 "method": "nvmf_subsystem_add_listener", 00:18:21.077 "params": { 00:18:21.077 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.077 "listen_address": { 00:18:21.077 "trtype": "TCP", 00:18:21.077 "adrfam": "IPv4", 00:18:21.077 "traddr": "10.0.0.2", 00:18:21.077 "trsvcid": "4420" 00:18:21.077 }, 00:18:21.077 "secure_channel": true 00:18:21.077 } 00:18:21.077 } 00:18:21.077 ] 00:18:21.077 } 00:18:21.077 ] 00:18:21.077 }' 00:18:21.077 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:21.335 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:18:21.335 "subsystems": [ 00:18:21.335 { 00:18:21.335 "subsystem": "keyring", 00:18:21.335 "config": [ 00:18:21.335 { 00:18:21.335 "method": "keyring_file_add_key", 00:18:21.335 "params": { 00:18:21.335 "name": "key0", 00:18:21.335 "path": "/tmp/tmp.yOPKBkQtAb" 00:18:21.335 } 00:18:21.335 } 00:18:21.335 ] 00:18:21.335 }, 00:18:21.335 { 00:18:21.335 "subsystem": "iobuf", 00:18:21.335 "config": [ 00:18:21.335 { 00:18:21.335 "method": "iobuf_set_options", 00:18:21.335 "params": { 00:18:21.335 "small_pool_count": 8192, 00:18:21.335 "large_pool_count": 1024, 00:18:21.335 "small_bufsize": 8192, 00:18:21.335 "large_bufsize": 135168, 00:18:21.335 "enable_numa": false 00:18:21.335 } 00:18:21.335 } 00:18:21.335 ] 00:18:21.335 }, 00:18:21.335 { 00:18:21.335 "subsystem": "sock", 00:18:21.335 "config": [ 00:18:21.335 { 00:18:21.335 "method": "sock_set_default_impl", 00:18:21.335 "params": { 00:18:21.335 "impl_name": "posix" 00:18:21.335 } 00:18:21.335 }, 00:18:21.335 { 00:18:21.335 "method": "sock_impl_set_options", 00:18:21.335 "params": { 00:18:21.335 "impl_name": "ssl", 00:18:21.335 "recv_buf_size": 4096, 00:18:21.335 "send_buf_size": 4096, 00:18:21.335 "enable_recv_pipe": true, 00:18:21.335 "enable_quickack": false, 00:18:21.335 "enable_placement_id": 0, 00:18:21.335 "enable_zerocopy_send_server": true, 00:18:21.335 "enable_zerocopy_send_client": false, 00:18:21.335 "zerocopy_threshold": 0, 00:18:21.335 "tls_version": 0, 00:18:21.335 "enable_ktls": false 00:18:21.335 } 00:18:21.335 }, 00:18:21.335 { 00:18:21.335 "method": "sock_impl_set_options", 00:18:21.335 "params": { 00:18:21.335 "impl_name": "posix", 00:18:21.335 "recv_buf_size": 2097152, 00:18:21.335 "send_buf_size": 2097152, 00:18:21.335 "enable_recv_pipe": true, 00:18:21.335 "enable_quickack": false, 00:18:21.335 "enable_placement_id": 0, 00:18:21.335 "enable_zerocopy_send_server": true, 00:18:21.335 "enable_zerocopy_send_client": false, 00:18:21.335 "zerocopy_threshold": 0, 00:18:21.335 "tls_version": 0, 00:18:21.335 "enable_ktls": false 00:18:21.335 } 00:18:21.335 
} 00:18:21.335 ] 00:18:21.335 }, 00:18:21.335 { 00:18:21.335 "subsystem": "vmd", 00:18:21.335 "config": [] 00:18:21.335 }, 00:18:21.335 { 00:18:21.335 "subsystem": "accel", 00:18:21.335 "config": [ 00:18:21.335 { 00:18:21.335 "method": "accel_set_options", 00:18:21.335 "params": { 00:18:21.335 "small_cache_size": 128, 00:18:21.335 "large_cache_size": 16, 00:18:21.335 "task_count": 2048, 00:18:21.335 "sequence_count": 2048, 00:18:21.335 "buf_count": 2048 00:18:21.335 } 00:18:21.335 } 00:18:21.336 ] 00:18:21.336 }, 00:18:21.336 { 00:18:21.336 "subsystem": "bdev", 00:18:21.336 "config": [ 00:18:21.336 { 00:18:21.336 "method": "bdev_set_options", 00:18:21.336 "params": { 00:18:21.336 "bdev_io_pool_size": 65535, 00:18:21.336 "bdev_io_cache_size": 256, 00:18:21.336 "bdev_auto_examine": true, 00:18:21.336 "iobuf_small_cache_size": 128, 00:18:21.336 "iobuf_large_cache_size": 16 00:18:21.336 } 00:18:21.336 }, 00:18:21.336 { 00:18:21.336 "method": "bdev_raid_set_options", 00:18:21.336 "params": { 00:18:21.336 "process_window_size_kb": 1024, 00:18:21.336 "process_max_bandwidth_mb_sec": 0 00:18:21.336 } 00:18:21.336 }, 00:18:21.336 { 00:18:21.336 "method": "bdev_iscsi_set_options", 00:18:21.336 "params": { 00:18:21.336 "timeout_sec": 30 00:18:21.336 } 00:18:21.336 }, 00:18:21.336 { 00:18:21.336 "method": "bdev_nvme_set_options", 00:18:21.336 "params": { 00:18:21.336 "action_on_timeout": "none", 00:18:21.336 "timeout_us": 0, 00:18:21.336 "timeout_admin_us": 0, 00:18:21.336 "keep_alive_timeout_ms": 10000, 00:18:21.336 "arbitration_burst": 0, 00:18:21.336 "low_priority_weight": 0, 00:18:21.336 "medium_priority_weight": 0, 00:18:21.336 "high_priority_weight": 0, 00:18:21.336 "nvme_adminq_poll_period_us": 10000, 00:18:21.336 "nvme_ioq_poll_period_us": 0, 00:18:21.336 "io_queue_requests": 512, 00:18:21.336 "delay_cmd_submit": true, 00:18:21.336 "transport_retry_count": 4, 00:18:21.336 "bdev_retry_count": 3, 00:18:21.336 "transport_ack_timeout": 0, 00:18:21.336 "ctrlr_loss_timeout_sec": 0, 00:18:21.336 "reconnect_delay_sec": 0, 00:18:21.336 "fast_io_fail_timeout_sec": 0, 00:18:21.336 "disable_auto_failback": false, 00:18:21.336 "generate_uuids": false, 00:18:21.336 "transport_tos": 0, 00:18:21.336 "nvme_error_stat": false, 00:18:21.336 "rdma_srq_size": 0, 00:18:21.336 "io_path_stat": false, 00:18:21.336 "allow_accel_sequence": false, 00:18:21.336 "rdma_max_cq_size": 0, 00:18:21.336 "rdma_cm_event_timeout_ms": 0, 00:18:21.336 "dhchap_digests": [ 00:18:21.336 "sha256", 00:18:21.336 "sha384", 00:18:21.336 "sha512" 00:18:21.336 ], 00:18:21.336 "dhchap_dhgroups": [ 00:18:21.336 "null", 00:18:21.336 "ffdhe2048", 00:18:21.336 "ffdhe3072", 00:18:21.336 "ffdhe4096", 00:18:21.336 "ffdhe6144", 00:18:21.336 "ffdhe8192" 00:18:21.336 ] 00:18:21.336 } 00:18:21.336 }, 00:18:21.336 { 00:18:21.336 "method": "bdev_nvme_attach_controller", 00:18:21.336 "params": { 00:18:21.336 "name": "TLSTEST", 00:18:21.336 "trtype": "TCP", 00:18:21.336 "adrfam": "IPv4", 00:18:21.336 "traddr": "10.0.0.2", 00:18:21.336 "trsvcid": "4420", 00:18:21.336 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.336 "prchk_reftag": false, 00:18:21.336 "prchk_guard": false, 00:18:21.336 "ctrlr_loss_timeout_sec": 0, 00:18:21.336 "reconnect_delay_sec": 0, 00:18:21.336 "fast_io_fail_timeout_sec": 0, 00:18:21.336 "psk": "key0", 00:18:21.336 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:21.336 "hdgst": false, 00:18:21.336 "ddgst": false, 00:18:21.336 "multipath": "multipath" 00:18:21.336 } 00:18:21.336 }, 00:18:21.336 { 00:18:21.336 "method": 
"bdev_nvme_set_hotplug", 00:18:21.336 "params": { 00:18:21.336 "period_us": 100000, 00:18:21.336 "enable": false 00:18:21.336 } 00:18:21.336 }, 00:18:21.336 { 00:18:21.336 "method": "bdev_wait_for_examine" 00:18:21.336 } 00:18:21.336 ] 00:18:21.336 }, 00:18:21.336 { 00:18:21.336 "subsystem": "nbd", 00:18:21.336 "config": [] 00:18:21.336 } 00:18:21.336 ] 00:18:21.336 }' 00:18:21.336 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2524123 00:18:21.336 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2524123 ']' 00:18:21.336 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2524123 00:18:21.336 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:21.336 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:21.336 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2524123 00:18:21.336 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:21.336 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:21.336 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2524123' 00:18:21.336 killing process with pid 2524123 00:18:21.336 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2524123 00:18:21.336 Received shutdown signal, test time was about 10.000000 seconds 00:18:21.336 00:18:21.336 Latency(us) 00:18:21.336 [2024-11-20T06:20:24.769Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.336 [2024-11-20T06:20:24.769Z] =================================================================================================================== 00:18:21.336 [2024-11-20T06:20:24.769Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:21.336 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2524123 00:18:21.594 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2523840 00:18:21.594 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2523840 ']' 00:18:21.594 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2523840 00:18:21.594 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:21.594 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:21.594 07:20:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2523840 00:18:21.594 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:21.594 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:21.594 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2523840' 00:18:21.594 killing process with pid 2523840 00:18:21.594 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2523840 00:18:21.594 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2523840 00:18:21.853 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:21.853 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:21.853 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:21.853 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:18:21.853 "subsystems": [ 00:18:21.853 { 00:18:21.853 "subsystem": "keyring", 00:18:21.853 "config": [ 00:18:21.853 { 00:18:21.853 "method": "keyring_file_add_key", 00:18:21.853 "params": { 00:18:21.853 "name": "key0", 00:18:21.853 "path": "/tmp/tmp.yOPKBkQtAb" 00:18:21.853 } 00:18:21.853 } 00:18:21.853 ] 00:18:21.853 }, 00:18:21.853 { 00:18:21.853 "subsystem": "iobuf", 00:18:21.853 "config": [ 00:18:21.853 { 00:18:21.853 "method": "iobuf_set_options", 00:18:21.853 "params": { 00:18:21.853 "small_pool_count": 8192, 00:18:21.853 "large_pool_count": 1024, 00:18:21.853 "small_bufsize": 8192, 00:18:21.853 "large_bufsize": 135168, 00:18:21.853 "enable_numa": false 00:18:21.853 } 00:18:21.853 } 00:18:21.853 ] 00:18:21.853 }, 00:18:21.853 { 00:18:21.853 "subsystem": "sock", 00:18:21.853 "config": [ 00:18:21.853 { 00:18:21.853 "method": "sock_set_default_impl", 00:18:21.853 "params": { 00:18:21.853 "impl_name": "posix" 00:18:21.853 } 00:18:21.853 }, 00:18:21.853 { 00:18:21.853 "method": "sock_impl_set_options", 00:18:21.853 "params": { 00:18:21.853 "impl_name": "ssl", 00:18:21.853 "recv_buf_size": 4096, 00:18:21.853 "send_buf_size": 4096, 00:18:21.853 "enable_recv_pipe": true, 00:18:21.853 "enable_quickack": false, 00:18:21.853 "enable_placement_id": 0, 00:18:21.853 "enable_zerocopy_send_server": true, 00:18:21.853 "enable_zerocopy_send_client": false, 00:18:21.853 "zerocopy_threshold": 0, 00:18:21.853 "tls_version": 0, 00:18:21.853 "enable_ktls": false 00:18:21.853 } 00:18:21.853 }, 00:18:21.853 { 00:18:21.853 "method": "sock_impl_set_options", 00:18:21.853 "params": { 00:18:21.853 "impl_name": "posix", 00:18:21.853 "recv_buf_size": 2097152, 00:18:21.853 "send_buf_size": 2097152, 00:18:21.853 "enable_recv_pipe": true, 00:18:21.853 "enable_quickack": false, 00:18:21.853 "enable_placement_id": 0, 00:18:21.853 "enable_zerocopy_send_server": true, 00:18:21.853 "enable_zerocopy_send_client": false, 00:18:21.853 "zerocopy_threshold": 0, 00:18:21.853 "tls_version": 0, 00:18:21.853 "enable_ktls": false 00:18:21.853 } 00:18:21.853 } 00:18:21.853 ] 00:18:21.853 }, 00:18:21.853 { 00:18:21.853 "subsystem": "vmd", 00:18:21.853 "config": [] 00:18:21.853 }, 00:18:21.853 { 00:18:21.853 "subsystem": "accel", 00:18:21.853 "config": [ 00:18:21.853 { 00:18:21.853 "method": "accel_set_options", 00:18:21.853 "params": { 00:18:21.853 "small_cache_size": 128, 00:18:21.853 "large_cache_size": 16, 00:18:21.853 "task_count": 2048, 00:18:21.853 "sequence_count": 2048, 00:18:21.853 "buf_count": 2048 00:18:21.853 } 00:18:21.853 } 00:18:21.853 ] 00:18:21.853 }, 00:18:21.853 { 00:18:21.853 "subsystem": "bdev", 00:18:21.853 "config": [ 00:18:21.853 { 00:18:21.853 "method": "bdev_set_options", 00:18:21.853 "params": { 00:18:21.853 "bdev_io_pool_size": 65535, 00:18:21.853 "bdev_io_cache_size": 256, 00:18:21.853 "bdev_auto_examine": true, 00:18:21.853 "iobuf_small_cache_size": 128, 00:18:21.853 "iobuf_large_cache_size": 16 00:18:21.853 } 00:18:21.853 }, 00:18:21.853 { 00:18:21.853 "method": "bdev_raid_set_options", 00:18:21.853 "params": { 00:18:21.853 "process_window_size_kb": 1024, 00:18:21.853 "process_max_bandwidth_mb_sec": 0 00:18:21.853 } 00:18:21.853 }, 
00:18:21.853 { 00:18:21.853 "method": "bdev_iscsi_set_options", 00:18:21.853 "params": { 00:18:21.853 "timeout_sec": 30 00:18:21.853 } 00:18:21.853 }, 00:18:21.853 { 00:18:21.853 "method": "bdev_nvme_set_options", 00:18:21.853 "params": { 00:18:21.853 "action_on_timeout": "none", 00:18:21.853 "timeout_us": 0, 00:18:21.853 "timeout_admin_us": 0, 00:18:21.853 "keep_alive_timeout_ms": 10000, 00:18:21.853 "arbitration_burst": 0, 00:18:21.853 "low_priority_weight": 0, 00:18:21.853 "medium_priority_weight": 0, 00:18:21.853 "high_priority_weight": 0, 00:18:21.853 "nvme_adminq_poll_period_us": 10000, 00:18:21.853 "nvme_ioq_poll_period_us": 0, 00:18:21.853 "io_queue_requests": 0, 00:18:21.853 "delay_cmd_submit": true, 00:18:21.853 "transport_retry_count": 4, 00:18:21.853 "bdev_retry_count": 3, 00:18:21.853 "transport_ack_timeout": 0, 00:18:21.853 "ctrlr_loss_timeout_sec": 0, 00:18:21.853 "reconnect_delay_sec": 0, 00:18:21.853 "fast_io_fail_timeout_sec": 0, 00:18:21.853 "disable_auto_failback": false, 00:18:21.853 "generate_uuids": false, 00:18:21.853 "transport_tos": 0, 00:18:21.853 "nvme_error_stat": false, 00:18:21.853 "rdma_srq_size": 0, 00:18:21.853 "io_path_stat": false, 00:18:21.853 "allow_accel_sequence": false, 00:18:21.853 "rdma_max_cq_size": 0, 00:18:21.853 "rdma_cm_event_timeout_ms": 0, 00:18:21.853 "dhchap_digests": [ 00:18:21.853 "sha256", 00:18:21.853 "sha384", 00:18:21.853 "sha512" 00:18:21.853 ], 00:18:21.853 "dhchap_dhgroups": [ 00:18:21.853 "null", 00:18:21.854 "ffdhe2048", 00:18:21.854 "ffdhe3072", 00:18:21.854 "ffdhe4096", 00:18:21.854 "ffdhe6144", 00:18:21.854 "ffdhe8192" 00:18:21.854 ] 00:18:21.854 } 00:18:21.854 }, 00:18:21.854 { 00:18:21.854 "method": "bdev_nvme_set_hotplug", 00:18:21.854 "params": { 00:18:21.854 "period_us": 100000, 00:18:21.854 "enable": false 00:18:21.854 } 00:18:21.854 }, 00:18:21.854 { 00:18:21.854 "method": "bdev_malloc_create", 00:18:21.854 "params": { 00:18:21.854 "name": "malloc0", 00:18:21.854 "num_blocks": 8192, 00:18:21.854 "block_size": 4096, 00:18:21.854 "physical_block_size": 4096, 00:18:21.854 "uuid": "952713cd-6af6-4a41-aacb-695c1b1c169e", 00:18:21.854 "optimal_io_boundary": 0, 00:18:21.854 "md_size": 0, 00:18:21.854 "dif_type": 0, 00:18:21.854 "dif_is_head_of_md": false, 00:18:21.854 "dif_pi_format": 0 00:18:21.854 } 00:18:21.854 }, 00:18:21.854 { 00:18:21.854 "method": "bdev_wait_for_examine" 00:18:21.854 } 00:18:21.854 ] 00:18:21.854 }, 00:18:21.854 { 00:18:21.854 "subsystem": "nbd", 00:18:21.854 "config": [] 00:18:21.854 }, 00:18:21.854 { 00:18:21.854 "subsystem": "scheduler", 00:18:21.854 "config": [ 00:18:21.854 { 00:18:21.854 "method": "framework_set_scheduler", 00:18:21.854 "params": { 00:18:21.854 "name": "static" 00:18:21.854 } 00:18:21.854 } 00:18:21.854 ] 00:18:21.854 }, 00:18:21.854 { 00:18:21.854 "subsystem": "nvmf", 00:18:21.854 "config": [ 00:18:21.854 { 00:18:21.854 "method": "nvmf_set_config", 00:18:21.854 "params": { 00:18:21.854 "discovery_filter": "match_any", 00:18:21.854 "admin_cmd_passthru": { 00:18:21.854 "identify_ctrlr": false 00:18:21.854 }, 00:18:21.854 "dhchap_digests": [ 00:18:21.854 "sha256", 00:18:21.854 "sha384", 00:18:21.854 "sha512" 00:18:21.854 ], 00:18:21.854 "dhchap_dhgroups": [ 00:18:21.854 "null", 00:18:21.854 "ffdhe2048", 00:18:21.854 "ffdhe3072", 00:18:21.854 "ffdhe4096", 00:18:21.854 "ffdhe6144", 00:18:21.854 "ffdhe8192" 00:18:21.854 ] 00:18:21.854 } 00:18:21.854 }, 00:18:21.854 { 00:18:21.854 "method": "nvmf_set_max_subsystems", 00:18:21.854 "params": { 00:18:21.854 "max_subsystems": 1024 
00:18:21.854 } 00:18:21.854 }, 00:18:21.854 { 00:18:21.854 "method": "nvmf_set_crdt", 00:18:21.854 "params": { 00:18:21.854 "crdt1": 0, 00:18:21.854 "crdt2": 0, 00:18:21.854 "crdt3": 0 00:18:21.854 } 00:18:21.854 }, 00:18:21.854 { 00:18:21.854 "method": "nvmf_create_transport", 00:18:21.854 "params": { 00:18:21.854 "trtype": "TCP", 00:18:21.854 "max_queue_depth": 128, 00:18:21.854 "max_io_qpairs_per_ctrlr": 127, 00:18:21.854 "in_capsule_data_size": 4096, 00:18:21.854 "max_io_size": 131072, 00:18:21.854 "io_unit_size": 131072, 00:18:21.854 "max_aq_depth": 128, 00:18:21.854 "num_shared_buffers": 511, 00:18:21.854 "buf_cache_size": 4294967295, 00:18:21.854 "dif_insert_or_strip": false, 00:18:21.854 "zcopy": false, 00:18:21.854 "c2h_success": false, 00:18:21.854 "sock_priority": 0, 00:18:21.854 "abort_timeout_sec": 1, 00:18:21.854 "ack_timeout": 0, 00:18:21.854 "data_wr_pool_size": 0 00:18:21.854 } 00:18:21.854 }, 00:18:21.854 { 00:18:21.854 "method": "nvmf_create_subsystem", 00:18:21.854 "params": { 00:18:21.854 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.854 "allow_any_host": false, 00:18:21.854 "serial_number": "SPDK00000000000001", 00:18:21.854 "model_number": "SPDK bdev Controller", 00:18:21.854 "max_namespaces": 10, 00:18:21.854 "min_cntlid": 1, 00:18:21.854 "max_cntlid": 65519, 00:18:21.854 "ana_reporting": false 00:18:21.854 } 00:18:21.854 }, 00:18:21.854 { 00:18:21.854 "method": "nvmf_subsystem_add_host", 00:18:21.854 "params": { 00:18:21.854 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.854 "host": "nqn.2016-06.io.spdk:host1", 00:18:21.854 "psk": "key0" 00:18:21.854 } 00:18:21.854 }, 00:18:21.854 { 00:18:21.854 "method": "nvmf_subsystem_add_ns", 00:18:21.854 "params": { 00:18:21.854 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.854 "namespace": { 00:18:21.854 "nsid": 1, 00:18:21.854 "bdev_name": "malloc0", 00:18:21.854 "nguid": "952713CD6AF64A41AACB695C1B1C169E", 00:18:21.854 "uuid": "952713cd-6af6-4a41-aacb-695c1b1c169e", 00:18:21.854 "no_auto_visible": false 00:18:21.854 } 00:18:21.854 } 00:18:21.854 }, 00:18:21.854 { 00:18:21.854 "method": "nvmf_subsystem_add_listener", 00:18:21.854 "params": { 00:18:21.854 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.854 "listen_address": { 00:18:21.854 "trtype": "TCP", 00:18:21.854 "adrfam": "IPv4", 00:18:21.854 "traddr": "10.0.0.2", 00:18:21.854 "trsvcid": "4420" 00:18:21.854 }, 00:18:21.854 "secure_channel": true 00:18:21.854 } 00:18:21.854 } 00:18:21.854 ] 00:18:21.854 } 00:18:21.854 ] 00:18:21.854 }' 00:18:21.854 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:21.854 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2524406 00:18:21.854 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:21.854 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2524406 00:18:21.854 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2524406 ']' 00:18:21.854 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.854 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:21.854 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:18:21.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:21.854 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:21.854 07:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.112 [2024-11-20 07:20:25.301842] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:18:22.112 [2024-11-20 07:20:25.301930] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:22.112 [2024-11-20 07:20:25.371429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.112 [2024-11-20 07:20:25.426831] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:22.112 [2024-11-20 07:20:25.426884] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:22.112 [2024-11-20 07:20:25.426913] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:22.112 [2024-11-20 07:20:25.426924] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:22.112 [2024-11-20 07:20:25.426935] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:22.112 [2024-11-20 07:20:25.427571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:22.370 [2024-11-20 07:20:25.678274] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:22.370 [2024-11-20 07:20:25.710331] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:22.370 [2024-11-20 07:20:25.710581] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:22.935 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:22.935 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:22.935 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:22.935 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:22.935 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.935 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:22.935 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2524554 00:18:22.935 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2524554 /var/tmp/bdevperf.sock 00:18:22.935 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2524554 ']' 00:18:22.935 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:22.935 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:22.935 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:22.935 07:20:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:18:22.935 "subsystems": [ 00:18:22.935 { 00:18:22.935 "subsystem": "keyring", 00:18:22.935 "config": [ 00:18:22.935 { 00:18:22.935 "method": "keyring_file_add_key", 00:18:22.935 "params": { 00:18:22.935 "name": "key0", 00:18:22.935 "path": "/tmp/tmp.yOPKBkQtAb" 00:18:22.935 } 00:18:22.935 } 00:18:22.935 ] 00:18:22.935 }, 00:18:22.935 { 00:18:22.935 "subsystem": "iobuf", 00:18:22.935 "config": [ 00:18:22.935 { 00:18:22.935 "method": "iobuf_set_options", 00:18:22.935 "params": { 00:18:22.935 "small_pool_count": 8192, 00:18:22.935 "large_pool_count": 1024, 00:18:22.935 "small_bufsize": 8192, 00:18:22.935 "large_bufsize": 135168, 00:18:22.935 "enable_numa": false 00:18:22.935 } 00:18:22.935 } 00:18:22.935 ] 00:18:22.935 }, 00:18:22.935 { 00:18:22.935 "subsystem": "sock", 00:18:22.935 "config": [ 00:18:22.935 { 00:18:22.935 "method": "sock_set_default_impl", 00:18:22.935 "params": { 00:18:22.935 "impl_name": "posix" 00:18:22.935 } 00:18:22.935 }, 00:18:22.935 { 00:18:22.935 "method": "sock_impl_set_options", 00:18:22.935 "params": { 00:18:22.935 "impl_name": "ssl", 00:18:22.935 "recv_buf_size": 4096, 00:18:22.935 "send_buf_size": 4096, 00:18:22.935 "enable_recv_pipe": true, 00:18:22.935 "enable_quickack": false, 00:18:22.935 "enable_placement_id": 0, 00:18:22.935 "enable_zerocopy_send_server": true, 00:18:22.935 "enable_zerocopy_send_client": false, 00:18:22.935 "zerocopy_threshold": 0, 00:18:22.935 "tls_version": 0, 00:18:22.935 "enable_ktls": false 00:18:22.935 } 00:18:22.935 }, 00:18:22.935 { 00:18:22.935 "method": "sock_impl_set_options", 00:18:22.935 "params": { 00:18:22.935 "impl_name": "posix", 00:18:22.935 "recv_buf_size": 2097152, 00:18:22.935 "send_buf_size": 2097152, 00:18:22.935 "enable_recv_pipe": true, 00:18:22.935 "enable_quickack": false, 00:18:22.935 "enable_placement_id": 0, 00:18:22.935 "enable_zerocopy_send_server": true, 00:18:22.935 "enable_zerocopy_send_client": false, 00:18:22.935 "zerocopy_threshold": 0, 00:18:22.935 "tls_version": 0, 00:18:22.935 "enable_ktls": false 00:18:22.935 } 00:18:22.935 } 00:18:22.935 ] 00:18:22.935 }, 00:18:22.935 { 00:18:22.935 "subsystem": "vmd", 00:18:22.935 "config": [] 00:18:22.935 }, 00:18:22.935 { 00:18:22.935 "subsystem": "accel", 00:18:22.935 "config": [ 00:18:22.935 { 00:18:22.935 "method": "accel_set_options", 00:18:22.935 "params": { 00:18:22.935 "small_cache_size": 128, 00:18:22.935 "large_cache_size": 16, 00:18:22.935 "task_count": 2048, 00:18:22.935 "sequence_count": 2048, 00:18:22.935 "buf_count": 2048 00:18:22.935 } 00:18:22.935 } 00:18:22.935 ] 00:18:22.935 }, 00:18:22.935 { 00:18:22.935 "subsystem": "bdev", 00:18:22.935 "config": [ 00:18:22.935 { 00:18:22.935 "method": "bdev_set_options", 00:18:22.935 "params": { 00:18:22.935 "bdev_io_pool_size": 65535, 00:18:22.935 "bdev_io_cache_size": 256, 00:18:22.935 "bdev_auto_examine": true, 00:18:22.935 "iobuf_small_cache_size": 128, 00:18:22.935 "iobuf_large_cache_size": 16 00:18:22.936 } 00:18:22.936 }, 00:18:22.936 { 00:18:22.936 "method": "bdev_raid_set_options", 00:18:22.936 "params": { 00:18:22.936 "process_window_size_kb": 1024, 00:18:22.936 "process_max_bandwidth_mb_sec": 0 00:18:22.936 } 00:18:22.936 }, 00:18:22.936 { 00:18:22.936 "method": "bdev_iscsi_set_options", 00:18:22.936 "params": { 00:18:22.936 "timeout_sec": 30 00:18:22.936 } 00:18:22.936 }, 00:18:22.936 { 00:18:22.936 "method": "bdev_nvme_set_options", 00:18:22.936 "params": { 00:18:22.936 "action_on_timeout": "none", 00:18:22.936 
"timeout_us": 0, 00:18:22.936 "timeout_admin_us": 0, 00:18:22.936 "keep_alive_timeout_ms": 10000, 00:18:22.936 "arbitration_burst": 0, 00:18:22.936 "low_priority_weight": 0, 00:18:22.936 "medium_priority_weight": 0, 00:18:22.936 "high_priority_weight": 0, 00:18:22.936 "nvme_adminq_poll_period_us": 10000, 00:18:22.936 "nvme_ioq_poll_period_us": 0, 00:18:22.936 "io_queue_requests": 512, 00:18:22.936 "delay_cmd_submit": true, 00:18:22.936 "transport_retry_count": 4, 00:18:22.936 "bdev_retry_count": 3, 00:18:22.936 "transport_ack_timeout": 0, 00:18:22.936 "ctrlr_loss_timeout_sec": 0, 00:18:22.936 "reconnect_delay_sec": 0, 00:18:22.936 "fast_io_fail_timeout_sec": 0, 00:18:22.936 "disable_auto_failback": false, 00:18:22.936 "generate_uuids": false, 00:18:22.936 "transport_tos": 0, 00:18:22.936 "nvme_error_stat": false, 00:18:22.936 "rdma_srq_size": 0, 00:18:22.936 "io_path_stat": false, 00:18:22.936 "allow_accel_sequence": false, 00:18:22.936 "rdma_max_cq_size": 0, 00:18:22.936 "rdma_cm_event_timeout_ms": 0, 00:18:22.936 "dhchap_digests": [ 00:18:22.936 "sha256", 00:18:22.936 "sha384", 00:18:22.936 "sha512" 00:18:22.936 ], 00:18:22.936 "dhchap_dhgroups": [ 00:18:22.936 "null", 00:18:22.936 "ffdhe2048", 00:18:22.936 "ffdhe3072", 00:18:22.936 "ffdhe4096", 00:18:22.936 "ffdhe6144", 00:18:22.936 "ffdhe8192" 00:18:22.936 ] 00:18:22.936 } 00:18:22.936 }, 00:18:22.936 { 00:18:22.936 "method": "bdev_nvme_attach_controller", 00:18:22.936 "params": { 00:18:22.936 "name": "TLSTEST", 00:18:22.936 "trtype": "TCP", 00:18:22.936 "adrfam": "IPv4", 00:18:22.936 "traddr": "10.0.0.2", 00:18:22.936 "trsvcid": "4420", 00:18:22.936 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.936 "prchk_reftag": false, 00:18:22.936 "prchk_guard": false, 00:18:22.936 "ctrlr_loss_timeout_sec": 0, 00:18:22.936 "reconnect_delay_sec": 0, 00:18:22.936 "fast_io_fail_timeout_sec": 0, 00:18:22.936 "psk": "key0", 00:18:22.936 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:22.936 "hdgst": false, 00:18:22.936 "ddgst": false, 00:18:22.936 "multipath": "multipath" 00:18:22.936 } 00:18:22.936 }, 00:18:22.936 { 00:18:22.936 "method": "bdev_nvme_set_hotplug", 00:18:22.936 "params": { 00:18:22.936 "period_us": 100000, 00:18:22.936 "enable": false 00:18:22.936 } 00:18:22.936 }, 00:18:22.936 { 00:18:22.936 "method": "bdev_wait_for_examine" 00:18:22.936 } 00:18:22.936 ] 00:18:22.936 }, 00:18:22.936 { 00:18:22.936 "subsystem": "nbd", 00:18:22.936 "config": [] 00:18:22.936 } 00:18:22.936 ] 00:18:22.936 }' 00:18:22.936 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:22.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:22.936 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:22.936 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.936 [2024-11-20 07:20:26.353542] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:18:22.936 [2024-11-20 07:20:26.353633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2524554 ] 00:18:23.193 [2024-11-20 07:20:26.419035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.194 [2024-11-20 07:20:26.476388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:23.451 [2024-11-20 07:20:26.661374] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:23.451 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:23.451 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:23.451 07:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:23.709 Running I/O for 10 seconds... 00:18:25.629 3457.00 IOPS, 13.50 MiB/s [2024-11-20T06:20:29.994Z] 3502.00 IOPS, 13.68 MiB/s [2024-11-20T06:20:30.927Z] 3513.33 IOPS, 13.72 MiB/s [2024-11-20T06:20:32.298Z] 3508.75 IOPS, 13.71 MiB/s [2024-11-20T06:20:33.231Z] 3517.40 IOPS, 13.74 MiB/s [2024-11-20T06:20:34.164Z] 3527.67 IOPS, 13.78 MiB/s [2024-11-20T06:20:35.096Z] 3521.00 IOPS, 13.75 MiB/s [2024-11-20T06:20:36.030Z] 3510.00 IOPS, 13.71 MiB/s [2024-11-20T06:20:36.963Z] 3509.56 IOPS, 13.71 MiB/s [2024-11-20T06:20:36.963Z] 3508.50 IOPS, 13.71 MiB/s 00:18:33.530 Latency(us) 00:18:33.530 [2024-11-20T06:20:36.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.530 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:33.530 Verification LBA range: start 0x0 length 0x2000 00:18:33.530 TLSTESTn1 : 10.02 3513.76 13.73 0.00 0.00 36367.85 7670.14 33787.45 00:18:33.530 [2024-11-20T06:20:36.963Z] =================================================================================================================== 00:18:33.530 [2024-11-20T06:20:36.963Z] Total : 3513.76 13.73 0.00 0.00 36367.85 7670.14 33787.45 00:18:33.530 { 00:18:33.530 "results": [ 00:18:33.530 { 00:18:33.530 "job": "TLSTESTn1", 00:18:33.530 "core_mask": "0x4", 00:18:33.530 "workload": "verify", 00:18:33.530 "status": "finished", 00:18:33.530 "verify_range": { 00:18:33.530 "start": 0, 00:18:33.530 "length": 8192 00:18:33.530 }, 00:18:33.530 "queue_depth": 128, 00:18:33.530 "io_size": 4096, 00:18:33.530 "runtime": 10.020897, 00:18:33.530 "iops": 3513.7573013673327, 00:18:33.530 "mibps": 13.725614458466143, 00:18:33.530 "io_failed": 0, 00:18:33.530 "io_timeout": 0, 00:18:33.530 "avg_latency_us": 36367.85275914408, 00:18:33.530 "min_latency_us": 7670.139259259259, 00:18:33.530 "max_latency_us": 33787.44888888889 00:18:33.530 } 00:18:33.530 ], 00:18:33.530 "core_count": 1 00:18:33.530 } 00:18:33.789 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:33.789 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2524554 00:18:33.789 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2524554 ']' 00:18:33.789 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2524554 00:18:33.789 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # uname 00:18:33.789 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:33.789 07:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2524554 00:18:33.789 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:33.789 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:33.789 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2524554' 00:18:33.789 killing process with pid 2524554 00:18:33.789 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2524554 00:18:33.789 Received shutdown signal, test time was about 10.000000 seconds 00:18:33.789 00:18:33.789 Latency(us) 00:18:33.789 [2024-11-20T06:20:37.222Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.789 [2024-11-20T06:20:37.222Z] =================================================================================================================== 00:18:33.789 [2024-11-20T06:20:37.222Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:33.789 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2524554 00:18:33.789 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2524406 00:18:33.789 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2524406 ']' 00:18:33.789 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2524406 00:18:33.789 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:33.789 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:33.789 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2524406 00:18:34.048 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:34.048 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:34.048 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2524406' 00:18:34.048 killing process with pid 2524406 00:18:34.048 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2524406 00:18:34.048 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2524406 00:18:34.048 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:18:34.048 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:34.048 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:34.048 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.048 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2525877 00:18:34.048 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:34.048 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2525877 
00:18:34.048 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2525877 ']' 00:18:34.048 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.048 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:34.048 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.048 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:34.048 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.306 [2024-11-20 07:20:37.525453] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:18:34.306 [2024-11-20 07:20:37.525538] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:34.306 [2024-11-20 07:20:37.596974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.306 [2024-11-20 07:20:37.654110] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:34.306 [2024-11-20 07:20:37.654165] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:34.306 [2024-11-20 07:20:37.654193] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:34.306 [2024-11-20 07:20:37.654204] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:34.306 [2024-11-20 07:20:37.654214] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
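For reference, the nvmfappstart/waitforlisten pattern traced above amounts to launching nvmf_tgt inside the test network namespace and polling until its RPC socket answers. The sketch below uses the workspace path, namespace name and flags from this run; the poll loop is a simplified stand-in for the waitforlisten() helper in autotest_common.sh, not its exact implementation.

# Sketch only: paths and cvl_0_0_ns_spdk are taken from this run.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC_SOCK=/var/tmp/spdk.sock

ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
nvmfpid=$!

# Wait until the target answers on its UNIX-domain RPC socket.
for _ in $(seq 1 100); do
    if "$SPDK/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done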
00:18:34.306 [2024-11-20 07:20:37.654811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.564 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:34.564 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:34.564 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:34.564 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:34.564 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.564 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:34.564 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.yOPKBkQtAb 00:18:34.564 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.yOPKBkQtAb 00:18:34.564 07:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:34.822 [2024-11-20 07:20:38.104777] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:34.822 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:35.079 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:35.337 [2024-11-20 07:20:38.702444] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:35.337 [2024-11-20 07:20:38.702718] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:35.337 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:35.595 malloc0 00:18:35.595 07:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:35.852 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.yOPKBkQtAb 00:18:36.416 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:36.674 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2526173 00:18:36.674 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:36.674 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:36.674 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2526173 /var/tmp/bdevperf.sock 00:18:36.674 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 2526173 ']' 00:18:36.674 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:36.674 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:36.674 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:36.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:36.674 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:36.674 07:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.674 [2024-11-20 07:20:39.907492] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:18:36.674 [2024-11-20 07:20:39.907575] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2526173 ] 00:18:36.674 [2024-11-20 07:20:39.970989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.674 [2024-11-20 07:20:40.033710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:36.932 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:36.932 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:36.932 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yOPKBkQtAb 00:18:37.190 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:37.448 [2024-11-20 07:20:40.667045] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:37.448 nvme0n1 00:18:37.448 07:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:37.448 Running I/O for 1 seconds... 
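The TLS setup and attach sequence traced above condenses to roughly the following rpc.py calls, taken from the target/tls.sh trace in this run. /tmp/tmp.yOPKBkQtAb is the temporary PSK file generated earlier in the run and would differ on any other run; the target-side calls use the default /var/tmp/spdk.sock, the initiator-side calls the bdevperf socket.

# Condensed sketch of the calls traced above (setup_nvmf_tgt plus the bdevperf attach).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Target side: TCP transport, subsystem, TLS listener, namespace, PSK-bound host.
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC keyring_file_add_key key0 /tmp/tmp.yOPKBkQtAb
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

# Initiator side: hand the same key to bdevperf and attach over TCP with the PSK,
# then drive the workload through bdevperf.py.
$RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yOPKBkQtAb
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests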
00:18:38.823 3393.00 IOPS, 13.25 MiB/s 00:18:38.823 Latency(us) 00:18:38.823 [2024-11-20T06:20:42.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.823 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:38.823 Verification LBA range: start 0x0 length 0x2000 00:18:38.823 nvme0n1 : 1.02 3448.23 13.47 0.00 0.00 36801.85 6359.42 29709.65 00:18:38.823 [2024-11-20T06:20:42.256Z] =================================================================================================================== 00:18:38.823 [2024-11-20T06:20:42.256Z] Total : 3448.23 13.47 0.00 0.00 36801.85 6359.42 29709.65 00:18:38.823 { 00:18:38.823 "results": [ 00:18:38.823 { 00:18:38.823 "job": "nvme0n1", 00:18:38.823 "core_mask": "0x2", 00:18:38.823 "workload": "verify", 00:18:38.823 "status": "finished", 00:18:38.823 "verify_range": { 00:18:38.823 "start": 0, 00:18:38.823 "length": 8192 00:18:38.823 }, 00:18:38.823 "queue_depth": 128, 00:18:38.823 "io_size": 4096, 00:18:38.823 "runtime": 1.021105, 00:18:38.823 "iops": 3448.2252070061354, 00:18:38.823 "mibps": 13.469629714867716, 00:18:38.823 "io_failed": 0, 00:18:38.823 "io_timeout": 0, 00:18:38.823 "avg_latency_us": 36801.84844457067, 00:18:38.823 "min_latency_us": 6359.419259259259, 00:18:38.823 "max_latency_us": 29709.653333333332 00:18:38.823 } 00:18:38.823 ], 00:18:38.823 "core_count": 1 00:18:38.823 } 00:18:38.823 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2526173 00:18:38.823 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2526173 ']' 00:18:38.823 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2526173 00:18:38.823 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:38.823 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:38.823 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2526173 00:18:38.823 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:38.823 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:38.823 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2526173' 00:18:38.823 killing process with pid 2526173 00:18:38.823 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2526173 00:18:38.823 Received shutdown signal, test time was about 1.000000 seconds 00:18:38.823 00:18:38.823 Latency(us) 00:18:38.823 [2024-11-20T06:20:42.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.823 [2024-11-20T06:20:42.256Z] =================================================================================================================== 00:18:38.823 [2024-11-20T06:20:42.256Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:38.823 07:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2526173 00:18:38.823 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2525877 00:18:38.823 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2525877 ']' 00:18:38.823 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2525877 00:18:38.823 07:20:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:38.823 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:38.823 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2525877 00:18:38.823 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:38.823 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:38.823 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2525877' 00:18:38.823 killing process with pid 2525877 00:18:38.823 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2525877 00:18:38.823 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2525877 00:18:39.081 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:18:39.081 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:39.081 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:39.081 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.081 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2526453 00:18:39.081 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:39.081 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2526453 00:18:39.081 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2526453 ']' 00:18:39.081 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.081 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:39.081 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.081 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:39.081 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.081 [2024-11-20 07:20:42.451320] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:18:39.081 [2024-11-20 07:20:42.451401] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:39.340 [2024-11-20 07:20:42.522799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.340 [2024-11-20 07:20:42.581217] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:39.340 [2024-11-20 07:20:42.581273] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:39.340 [2024-11-20 07:20:42.581309] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:39.340 [2024-11-20 07:20:42.581323] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:39.340 [2024-11-20 07:20:42.581333] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:39.340 [2024-11-20 07:20:42.581970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.340 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:39.340 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:39.340 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:39.340 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:39.340 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.340 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:39.340 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:18:39.340 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.340 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.340 [2024-11-20 07:20:42.737747] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:39.340 malloc0 00:18:39.340 [2024-11-20 07:20:42.769879] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:39.340 [2024-11-20 07:20:42.770140] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:39.599 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.599 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2526474 00:18:39.599 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:39.599 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2526474 /var/tmp/bdevperf.sock 00:18:39.599 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2526474 ']' 00:18:39.599 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:39.599 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:39.599 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:39.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:39.599 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:39.599 07:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.599 [2024-11-20 07:20:42.841899] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:18:39.599 [2024-11-20 07:20:42.841972] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2526474 ] 00:18:39.599 [2024-11-20 07:20:42.906747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.599 [2024-11-20 07:20:42.965716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.857 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:39.857 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:39.857 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yOPKBkQtAb 00:18:40.114 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:40.372 [2024-11-20 07:20:43.633935] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:40.372 nvme0n1 00:18:40.372 07:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:40.630 Running I/O for 1 seconds... 00:18:41.564 3385.00 IOPS, 13.22 MiB/s 00:18:41.564 Latency(us) 00:18:41.564 [2024-11-20T06:20:44.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.564 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:41.564 Verification LBA range: start 0x0 length 0x2000 00:18:41.564 nvme0n1 : 1.03 3419.48 13.36 0.00 0.00 36961.14 6213.78 30486.38 00:18:41.564 [2024-11-20T06:20:44.997Z] =================================================================================================================== 00:18:41.564 [2024-11-20T06:20:44.997Z] Total : 3419.48 13.36 0.00 0.00 36961.14 6213.78 30486.38 00:18:41.564 { 00:18:41.564 "results": [ 00:18:41.564 { 00:18:41.564 "job": "nvme0n1", 00:18:41.564 "core_mask": "0x2", 00:18:41.564 "workload": "verify", 00:18:41.564 "status": "finished", 00:18:41.564 "verify_range": { 00:18:41.564 "start": 0, 00:18:41.564 "length": 8192 00:18:41.564 }, 00:18:41.564 "queue_depth": 128, 00:18:41.564 "io_size": 4096, 00:18:41.564 "runtime": 1.027348, 00:18:41.564 "iops": 3419.4839528572597, 00:18:41.564 "mibps": 13.35735919084867, 00:18:41.564 "io_failed": 0, 00:18:41.564 "io_timeout": 0, 00:18:41.564 "avg_latency_us": 36961.14046325289, 00:18:41.564 "min_latency_us": 6213.783703703703, 00:18:41.564 "max_latency_us": 30486.376296296297 00:18:41.564 } 00:18:41.564 ], 00:18:41.564 "core_count": 1 00:18:41.564 } 00:18:41.564 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:18:41.564 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.564 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.564 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.564 07:20:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:18:41.564 "subsystems": [ 00:18:41.564 { 00:18:41.564 "subsystem": "keyring", 00:18:41.564 "config": [ 00:18:41.564 { 00:18:41.564 "method": "keyring_file_add_key", 00:18:41.564 "params": { 00:18:41.564 "name": "key0", 00:18:41.564 "path": "/tmp/tmp.yOPKBkQtAb" 00:18:41.564 } 00:18:41.564 } 00:18:41.564 ] 00:18:41.564 }, 00:18:41.564 { 00:18:41.564 "subsystem": "iobuf", 00:18:41.564 "config": [ 00:18:41.564 { 00:18:41.564 "method": "iobuf_set_options", 00:18:41.564 "params": { 00:18:41.564 "small_pool_count": 8192, 00:18:41.564 "large_pool_count": 1024, 00:18:41.564 "small_bufsize": 8192, 00:18:41.564 "large_bufsize": 135168, 00:18:41.564 "enable_numa": false 00:18:41.564 } 00:18:41.564 } 00:18:41.564 ] 00:18:41.564 }, 00:18:41.564 { 00:18:41.564 "subsystem": "sock", 00:18:41.564 "config": [ 00:18:41.564 { 00:18:41.564 "method": "sock_set_default_impl", 00:18:41.564 "params": { 00:18:41.564 "impl_name": "posix" 00:18:41.564 } 00:18:41.564 }, 00:18:41.564 { 00:18:41.564 "method": "sock_impl_set_options", 00:18:41.564 "params": { 00:18:41.564 "impl_name": "ssl", 00:18:41.564 "recv_buf_size": 4096, 00:18:41.565 "send_buf_size": 4096, 00:18:41.565 "enable_recv_pipe": true, 00:18:41.565 "enable_quickack": false, 00:18:41.565 "enable_placement_id": 0, 00:18:41.565 "enable_zerocopy_send_server": true, 00:18:41.565 "enable_zerocopy_send_client": false, 00:18:41.565 "zerocopy_threshold": 0, 00:18:41.565 "tls_version": 0, 00:18:41.565 "enable_ktls": false 00:18:41.565 } 00:18:41.565 }, 00:18:41.565 { 00:18:41.565 "method": "sock_impl_set_options", 00:18:41.565 "params": { 00:18:41.565 "impl_name": "posix", 00:18:41.565 "recv_buf_size": 2097152, 00:18:41.565 "send_buf_size": 2097152, 00:18:41.565 "enable_recv_pipe": true, 00:18:41.565 "enable_quickack": false, 00:18:41.565 "enable_placement_id": 0, 00:18:41.565 "enable_zerocopy_send_server": true, 00:18:41.565 "enable_zerocopy_send_client": false, 00:18:41.565 "zerocopy_threshold": 0, 00:18:41.565 "tls_version": 0, 00:18:41.565 "enable_ktls": false 00:18:41.565 } 00:18:41.565 } 00:18:41.565 ] 00:18:41.565 }, 00:18:41.565 { 00:18:41.565 "subsystem": "vmd", 00:18:41.565 "config": [] 00:18:41.565 }, 00:18:41.565 { 00:18:41.565 "subsystem": "accel", 00:18:41.565 "config": [ 00:18:41.565 { 00:18:41.565 "method": "accel_set_options", 00:18:41.565 "params": { 00:18:41.565 "small_cache_size": 128, 00:18:41.565 "large_cache_size": 16, 00:18:41.565 "task_count": 2048, 00:18:41.565 "sequence_count": 2048, 00:18:41.565 "buf_count": 2048 00:18:41.565 } 00:18:41.565 } 00:18:41.565 ] 00:18:41.565 }, 00:18:41.565 { 00:18:41.565 "subsystem": "bdev", 00:18:41.565 "config": [ 00:18:41.565 { 00:18:41.565 "method": "bdev_set_options", 00:18:41.565 "params": { 00:18:41.565 "bdev_io_pool_size": 65535, 00:18:41.565 "bdev_io_cache_size": 256, 00:18:41.565 "bdev_auto_examine": true, 00:18:41.565 "iobuf_small_cache_size": 128, 00:18:41.565 "iobuf_large_cache_size": 16 00:18:41.565 } 00:18:41.565 }, 00:18:41.565 { 00:18:41.565 "method": "bdev_raid_set_options", 00:18:41.565 "params": { 00:18:41.565 "process_window_size_kb": 1024, 00:18:41.565 "process_max_bandwidth_mb_sec": 0 00:18:41.565 } 00:18:41.565 }, 00:18:41.565 { 00:18:41.565 "method": "bdev_iscsi_set_options", 00:18:41.565 "params": { 00:18:41.565 "timeout_sec": 30 00:18:41.565 } 00:18:41.565 }, 00:18:41.565 { 00:18:41.565 "method": "bdev_nvme_set_options", 00:18:41.565 "params": { 00:18:41.565 "action_on_timeout": "none", 00:18:41.565 
"timeout_us": 0, 00:18:41.565 "timeout_admin_us": 0, 00:18:41.565 "keep_alive_timeout_ms": 10000, 00:18:41.565 "arbitration_burst": 0, 00:18:41.565 "low_priority_weight": 0, 00:18:41.565 "medium_priority_weight": 0, 00:18:41.565 "high_priority_weight": 0, 00:18:41.565 "nvme_adminq_poll_period_us": 10000, 00:18:41.565 "nvme_ioq_poll_period_us": 0, 00:18:41.565 "io_queue_requests": 0, 00:18:41.565 "delay_cmd_submit": true, 00:18:41.565 "transport_retry_count": 4, 00:18:41.565 "bdev_retry_count": 3, 00:18:41.565 "transport_ack_timeout": 0, 00:18:41.565 "ctrlr_loss_timeout_sec": 0, 00:18:41.565 "reconnect_delay_sec": 0, 00:18:41.565 "fast_io_fail_timeout_sec": 0, 00:18:41.565 "disable_auto_failback": false, 00:18:41.565 "generate_uuids": false, 00:18:41.565 "transport_tos": 0, 00:18:41.565 "nvme_error_stat": false, 00:18:41.565 "rdma_srq_size": 0, 00:18:41.565 "io_path_stat": false, 00:18:41.565 "allow_accel_sequence": false, 00:18:41.565 "rdma_max_cq_size": 0, 00:18:41.565 "rdma_cm_event_timeout_ms": 0, 00:18:41.565 "dhchap_digests": [ 00:18:41.565 "sha256", 00:18:41.565 "sha384", 00:18:41.565 "sha512" 00:18:41.565 ], 00:18:41.565 "dhchap_dhgroups": [ 00:18:41.565 "null", 00:18:41.565 "ffdhe2048", 00:18:41.565 "ffdhe3072", 00:18:41.565 "ffdhe4096", 00:18:41.565 "ffdhe6144", 00:18:41.565 "ffdhe8192" 00:18:41.565 ] 00:18:41.565 } 00:18:41.565 }, 00:18:41.565 { 00:18:41.565 "method": "bdev_nvme_set_hotplug", 00:18:41.565 "params": { 00:18:41.565 "period_us": 100000, 00:18:41.565 "enable": false 00:18:41.565 } 00:18:41.565 }, 00:18:41.565 { 00:18:41.565 "method": "bdev_malloc_create", 00:18:41.565 "params": { 00:18:41.565 "name": "malloc0", 00:18:41.565 "num_blocks": 8192, 00:18:41.565 "block_size": 4096, 00:18:41.565 "physical_block_size": 4096, 00:18:41.565 "uuid": "39aa3cb2-cc8b-4f69-b31a-185ff42ffd69", 00:18:41.565 "optimal_io_boundary": 0, 00:18:41.565 "md_size": 0, 00:18:41.565 "dif_type": 0, 00:18:41.565 "dif_is_head_of_md": false, 00:18:41.565 "dif_pi_format": 0 00:18:41.565 } 00:18:41.565 }, 00:18:41.565 { 00:18:41.565 "method": "bdev_wait_for_examine" 00:18:41.565 } 00:18:41.565 ] 00:18:41.565 }, 00:18:41.565 { 00:18:41.565 "subsystem": "nbd", 00:18:41.565 "config": [] 00:18:41.565 }, 00:18:41.565 { 00:18:41.565 "subsystem": "scheduler", 00:18:41.565 "config": [ 00:18:41.565 { 00:18:41.565 "method": "framework_set_scheduler", 00:18:41.565 "params": { 00:18:41.565 "name": "static" 00:18:41.565 } 00:18:41.565 } 00:18:41.565 ] 00:18:41.565 }, 00:18:41.565 { 00:18:41.565 "subsystem": "nvmf", 00:18:41.565 "config": [ 00:18:41.565 { 00:18:41.565 "method": "nvmf_set_config", 00:18:41.565 "params": { 00:18:41.565 "discovery_filter": "match_any", 00:18:41.565 "admin_cmd_passthru": { 00:18:41.565 "identify_ctrlr": false 00:18:41.565 }, 00:18:41.565 "dhchap_digests": [ 00:18:41.565 "sha256", 00:18:41.565 "sha384", 00:18:41.565 "sha512" 00:18:41.565 ], 00:18:41.565 "dhchap_dhgroups": [ 00:18:41.565 "null", 00:18:41.565 "ffdhe2048", 00:18:41.565 "ffdhe3072", 00:18:41.565 "ffdhe4096", 00:18:41.565 "ffdhe6144", 00:18:41.565 "ffdhe8192" 00:18:41.565 ] 00:18:41.565 } 00:18:41.565 }, 00:18:41.565 { 00:18:41.565 "method": "nvmf_set_max_subsystems", 00:18:41.565 "params": { 00:18:41.565 "max_subsystems": 1024 00:18:41.565 } 00:18:41.565 }, 00:18:41.565 { 00:18:41.565 "method": "nvmf_set_crdt", 00:18:41.565 "params": { 00:18:41.565 "crdt1": 0, 00:18:41.565 "crdt2": 0, 00:18:41.565 "crdt3": 0 00:18:41.565 } 00:18:41.565 }, 00:18:41.565 { 00:18:41.565 "method": "nvmf_create_transport", 00:18:41.565 "params": 
{ 00:18:41.565 "trtype": "TCP", 00:18:41.565 "max_queue_depth": 128, 00:18:41.565 "max_io_qpairs_per_ctrlr": 127, 00:18:41.565 "in_capsule_data_size": 4096, 00:18:41.565 "max_io_size": 131072, 00:18:41.565 "io_unit_size": 131072, 00:18:41.565 "max_aq_depth": 128, 00:18:41.565 "num_shared_buffers": 511, 00:18:41.565 "buf_cache_size": 4294967295, 00:18:41.565 "dif_insert_or_strip": false, 00:18:41.565 "zcopy": false, 00:18:41.565 "c2h_success": false, 00:18:41.565 "sock_priority": 0, 00:18:41.565 "abort_timeout_sec": 1, 00:18:41.565 "ack_timeout": 0, 00:18:41.565 "data_wr_pool_size": 0 00:18:41.565 } 00:18:41.565 }, 00:18:41.565 { 00:18:41.565 "method": "nvmf_create_subsystem", 00:18:41.565 "params": { 00:18:41.565 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.565 "allow_any_host": false, 00:18:41.565 "serial_number": "00000000000000000000", 00:18:41.565 "model_number": "SPDK bdev Controller", 00:18:41.565 "max_namespaces": 32, 00:18:41.565 "min_cntlid": 1, 00:18:41.565 "max_cntlid": 65519, 00:18:41.565 "ana_reporting": false 00:18:41.565 } 00:18:41.565 }, 00:18:41.565 { 00:18:41.565 "method": "nvmf_subsystem_add_host", 00:18:41.565 "params": { 00:18:41.565 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.565 "host": "nqn.2016-06.io.spdk:host1", 00:18:41.565 "psk": "key0" 00:18:41.565 } 00:18:41.565 }, 00:18:41.565 { 00:18:41.565 "method": "nvmf_subsystem_add_ns", 00:18:41.565 "params": { 00:18:41.565 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.565 "namespace": { 00:18:41.565 "nsid": 1, 00:18:41.565 "bdev_name": "malloc0", 00:18:41.565 "nguid": "39AA3CB2CC8B4F69B31A185FF42FFD69", 00:18:41.565 "uuid": "39aa3cb2-cc8b-4f69-b31a-185ff42ffd69", 00:18:41.565 "no_auto_visible": false 00:18:41.565 } 00:18:41.565 } 00:18:41.565 }, 00:18:41.565 { 00:18:41.565 "method": "nvmf_subsystem_add_listener", 00:18:41.565 "params": { 00:18:41.565 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.565 "listen_address": { 00:18:41.565 "trtype": "TCP", 00:18:41.565 "adrfam": "IPv4", 00:18:41.565 "traddr": "10.0.0.2", 00:18:41.565 "trsvcid": "4420" 00:18:41.565 }, 00:18:41.565 "secure_channel": false, 00:18:41.565 "sock_impl": "ssl" 00:18:41.565 } 00:18:41.565 } 00:18:41.565 ] 00:18:41.565 } 00:18:41.565 ] 00:18:41.565 }' 00:18:41.565 07:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:42.131 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:18:42.131 "subsystems": [ 00:18:42.131 { 00:18:42.131 "subsystem": "keyring", 00:18:42.131 "config": [ 00:18:42.131 { 00:18:42.131 "method": "keyring_file_add_key", 00:18:42.131 "params": { 00:18:42.131 "name": "key0", 00:18:42.131 "path": "/tmp/tmp.yOPKBkQtAb" 00:18:42.131 } 00:18:42.131 } 00:18:42.131 ] 00:18:42.131 }, 00:18:42.131 { 00:18:42.131 "subsystem": "iobuf", 00:18:42.131 "config": [ 00:18:42.131 { 00:18:42.131 "method": "iobuf_set_options", 00:18:42.131 "params": { 00:18:42.131 "small_pool_count": 8192, 00:18:42.131 "large_pool_count": 1024, 00:18:42.131 "small_bufsize": 8192, 00:18:42.131 "large_bufsize": 135168, 00:18:42.131 "enable_numa": false 00:18:42.131 } 00:18:42.131 } 00:18:42.131 ] 00:18:42.131 }, 00:18:42.131 { 00:18:42.131 "subsystem": "sock", 00:18:42.131 "config": [ 00:18:42.131 { 00:18:42.131 "method": "sock_set_default_impl", 00:18:42.131 "params": { 00:18:42.131 "impl_name": "posix" 00:18:42.131 } 00:18:42.131 }, 00:18:42.131 { 00:18:42.131 "method": "sock_impl_set_options", 00:18:42.131 
"params": { 00:18:42.131 "impl_name": "ssl", 00:18:42.131 "recv_buf_size": 4096, 00:18:42.131 "send_buf_size": 4096, 00:18:42.131 "enable_recv_pipe": true, 00:18:42.131 "enable_quickack": false, 00:18:42.131 "enable_placement_id": 0, 00:18:42.131 "enable_zerocopy_send_server": true, 00:18:42.131 "enable_zerocopy_send_client": false, 00:18:42.131 "zerocopy_threshold": 0, 00:18:42.131 "tls_version": 0, 00:18:42.131 "enable_ktls": false 00:18:42.131 } 00:18:42.131 }, 00:18:42.131 { 00:18:42.131 "method": "sock_impl_set_options", 00:18:42.131 "params": { 00:18:42.131 "impl_name": "posix", 00:18:42.131 "recv_buf_size": 2097152, 00:18:42.131 "send_buf_size": 2097152, 00:18:42.131 "enable_recv_pipe": true, 00:18:42.131 "enable_quickack": false, 00:18:42.131 "enable_placement_id": 0, 00:18:42.131 "enable_zerocopy_send_server": true, 00:18:42.131 "enable_zerocopy_send_client": false, 00:18:42.131 "zerocopy_threshold": 0, 00:18:42.131 "tls_version": 0, 00:18:42.131 "enable_ktls": false 00:18:42.131 } 00:18:42.131 } 00:18:42.131 ] 00:18:42.131 }, 00:18:42.131 { 00:18:42.131 "subsystem": "vmd", 00:18:42.131 "config": [] 00:18:42.131 }, 00:18:42.131 { 00:18:42.131 "subsystem": "accel", 00:18:42.131 "config": [ 00:18:42.131 { 00:18:42.131 "method": "accel_set_options", 00:18:42.131 "params": { 00:18:42.131 "small_cache_size": 128, 00:18:42.131 "large_cache_size": 16, 00:18:42.131 "task_count": 2048, 00:18:42.131 "sequence_count": 2048, 00:18:42.131 "buf_count": 2048 00:18:42.131 } 00:18:42.131 } 00:18:42.131 ] 00:18:42.131 }, 00:18:42.131 { 00:18:42.131 "subsystem": "bdev", 00:18:42.131 "config": [ 00:18:42.131 { 00:18:42.131 "method": "bdev_set_options", 00:18:42.131 "params": { 00:18:42.131 "bdev_io_pool_size": 65535, 00:18:42.131 "bdev_io_cache_size": 256, 00:18:42.131 "bdev_auto_examine": true, 00:18:42.131 "iobuf_small_cache_size": 128, 00:18:42.131 "iobuf_large_cache_size": 16 00:18:42.131 } 00:18:42.131 }, 00:18:42.131 { 00:18:42.131 "method": "bdev_raid_set_options", 00:18:42.131 "params": { 00:18:42.131 "process_window_size_kb": 1024, 00:18:42.131 "process_max_bandwidth_mb_sec": 0 00:18:42.131 } 00:18:42.131 }, 00:18:42.131 { 00:18:42.131 "method": "bdev_iscsi_set_options", 00:18:42.131 "params": { 00:18:42.131 "timeout_sec": 30 00:18:42.131 } 00:18:42.131 }, 00:18:42.131 { 00:18:42.131 "method": "bdev_nvme_set_options", 00:18:42.131 "params": { 00:18:42.131 "action_on_timeout": "none", 00:18:42.131 "timeout_us": 0, 00:18:42.131 "timeout_admin_us": 0, 00:18:42.131 "keep_alive_timeout_ms": 10000, 00:18:42.131 "arbitration_burst": 0, 00:18:42.131 "low_priority_weight": 0, 00:18:42.131 "medium_priority_weight": 0, 00:18:42.131 "high_priority_weight": 0, 00:18:42.131 "nvme_adminq_poll_period_us": 10000, 00:18:42.131 "nvme_ioq_poll_period_us": 0, 00:18:42.131 "io_queue_requests": 512, 00:18:42.131 "delay_cmd_submit": true, 00:18:42.131 "transport_retry_count": 4, 00:18:42.131 "bdev_retry_count": 3, 00:18:42.131 "transport_ack_timeout": 0, 00:18:42.131 "ctrlr_loss_timeout_sec": 0, 00:18:42.131 "reconnect_delay_sec": 0, 00:18:42.131 "fast_io_fail_timeout_sec": 0, 00:18:42.131 "disable_auto_failback": false, 00:18:42.131 "generate_uuids": false, 00:18:42.131 "transport_tos": 0, 00:18:42.131 "nvme_error_stat": false, 00:18:42.131 "rdma_srq_size": 0, 00:18:42.132 "io_path_stat": false, 00:18:42.132 "allow_accel_sequence": false, 00:18:42.132 "rdma_max_cq_size": 0, 00:18:42.132 "rdma_cm_event_timeout_ms": 0, 00:18:42.132 "dhchap_digests": [ 00:18:42.132 "sha256", 00:18:42.132 "sha384", 00:18:42.132 
"sha512" 00:18:42.132 ], 00:18:42.132 "dhchap_dhgroups": [ 00:18:42.132 "null", 00:18:42.132 "ffdhe2048", 00:18:42.132 "ffdhe3072", 00:18:42.132 "ffdhe4096", 00:18:42.132 "ffdhe6144", 00:18:42.132 "ffdhe8192" 00:18:42.132 ] 00:18:42.132 } 00:18:42.132 }, 00:18:42.132 { 00:18:42.132 "method": "bdev_nvme_attach_controller", 00:18:42.132 "params": { 00:18:42.132 "name": "nvme0", 00:18:42.132 "trtype": "TCP", 00:18:42.132 "adrfam": "IPv4", 00:18:42.132 "traddr": "10.0.0.2", 00:18:42.132 "trsvcid": "4420", 00:18:42.132 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.132 "prchk_reftag": false, 00:18:42.132 "prchk_guard": false, 00:18:42.132 "ctrlr_loss_timeout_sec": 0, 00:18:42.132 "reconnect_delay_sec": 0, 00:18:42.132 "fast_io_fail_timeout_sec": 0, 00:18:42.132 "psk": "key0", 00:18:42.132 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:42.132 "hdgst": false, 00:18:42.132 "ddgst": false, 00:18:42.132 "multipath": "multipath" 00:18:42.132 } 00:18:42.132 }, 00:18:42.132 { 00:18:42.132 "method": "bdev_nvme_set_hotplug", 00:18:42.132 "params": { 00:18:42.132 "period_us": 100000, 00:18:42.132 "enable": false 00:18:42.132 } 00:18:42.132 }, 00:18:42.132 { 00:18:42.132 "method": "bdev_enable_histogram", 00:18:42.132 "params": { 00:18:42.132 "name": "nvme0n1", 00:18:42.132 "enable": true 00:18:42.132 } 00:18:42.132 }, 00:18:42.132 { 00:18:42.132 "method": "bdev_wait_for_examine" 00:18:42.132 } 00:18:42.132 ] 00:18:42.132 }, 00:18:42.132 { 00:18:42.132 "subsystem": "nbd", 00:18:42.132 "config": [] 00:18:42.132 } 00:18:42.132 ] 00:18:42.132 }' 00:18:42.132 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2526474 00:18:42.132 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2526474 ']' 00:18:42.132 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2526474 00:18:42.132 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:42.132 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:42.132 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2526474 00:18:42.132 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:42.132 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:42.132 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2526474' 00:18:42.132 killing process with pid 2526474 00:18:42.132 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2526474 00:18:42.132 Received shutdown signal, test time was about 1.000000 seconds 00:18:42.132 00:18:42.132 Latency(us) 00:18:42.132 [2024-11-20T06:20:45.565Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.132 [2024-11-20T06:20:45.565Z] =================================================================================================================== 00:18:42.132 [2024-11-20T06:20:45.565Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:42.132 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2526474 00:18:42.390 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2526453 00:18:42.390 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2526453 
']' 00:18:42.390 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2526453 00:18:42.390 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:42.390 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:42.390 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2526453 00:18:42.390 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:42.390 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:42.390 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2526453' 00:18:42.390 killing process with pid 2526453 00:18:42.390 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2526453 00:18:42.390 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2526453 00:18:42.390 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:18:42.390 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:42.390 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:18:42.390 "subsystems": [ 00:18:42.390 { 00:18:42.390 "subsystem": "keyring", 00:18:42.390 "config": [ 00:18:42.390 { 00:18:42.390 "method": "keyring_file_add_key", 00:18:42.390 "params": { 00:18:42.390 "name": "key0", 00:18:42.390 "path": "/tmp/tmp.yOPKBkQtAb" 00:18:42.390 } 00:18:42.390 } 00:18:42.390 ] 00:18:42.390 }, 00:18:42.390 { 00:18:42.390 "subsystem": "iobuf", 00:18:42.390 "config": [ 00:18:42.390 { 00:18:42.390 "method": "iobuf_set_options", 00:18:42.390 "params": { 00:18:42.390 "small_pool_count": 8192, 00:18:42.390 "large_pool_count": 1024, 00:18:42.390 "small_bufsize": 8192, 00:18:42.390 "large_bufsize": 135168, 00:18:42.390 "enable_numa": false 00:18:42.390 } 00:18:42.390 } 00:18:42.390 ] 00:18:42.390 }, 00:18:42.390 { 00:18:42.390 "subsystem": "sock", 00:18:42.390 "config": [ 00:18:42.390 { 00:18:42.390 "method": "sock_set_default_impl", 00:18:42.390 "params": { 00:18:42.390 "impl_name": "posix" 00:18:42.390 } 00:18:42.390 }, 00:18:42.390 { 00:18:42.390 "method": "sock_impl_set_options", 00:18:42.390 "params": { 00:18:42.390 "impl_name": "ssl", 00:18:42.390 "recv_buf_size": 4096, 00:18:42.390 "send_buf_size": 4096, 00:18:42.390 "enable_recv_pipe": true, 00:18:42.390 "enable_quickack": false, 00:18:42.390 "enable_placement_id": 0, 00:18:42.390 "enable_zerocopy_send_server": true, 00:18:42.390 "enable_zerocopy_send_client": false, 00:18:42.390 "zerocopy_threshold": 0, 00:18:42.390 "tls_version": 0, 00:18:42.390 "enable_ktls": false 00:18:42.390 } 00:18:42.390 }, 00:18:42.390 { 00:18:42.390 "method": "sock_impl_set_options", 00:18:42.390 "params": { 00:18:42.390 "impl_name": "posix", 00:18:42.390 "recv_buf_size": 2097152, 00:18:42.390 "send_buf_size": 2097152, 00:18:42.390 "enable_recv_pipe": true, 00:18:42.390 "enable_quickack": false, 00:18:42.390 "enable_placement_id": 0, 00:18:42.390 "enable_zerocopy_send_server": true, 00:18:42.390 "enable_zerocopy_send_client": false, 00:18:42.390 "zerocopy_threshold": 0, 00:18:42.390 "tls_version": 0, 00:18:42.390 "enable_ktls": false 00:18:42.390 } 00:18:42.390 } 00:18:42.390 ] 00:18:42.390 }, 00:18:42.390 { 00:18:42.390 "subsystem": 
"vmd", 00:18:42.390 "config": [] 00:18:42.390 }, 00:18:42.390 { 00:18:42.390 "subsystem": "accel", 00:18:42.390 "config": [ 00:18:42.390 { 00:18:42.390 "method": "accel_set_options", 00:18:42.390 "params": { 00:18:42.390 "small_cache_size": 128, 00:18:42.390 "large_cache_size": 16, 00:18:42.390 "task_count": 2048, 00:18:42.390 "sequence_count": 2048, 00:18:42.390 "buf_count": 2048 00:18:42.390 } 00:18:42.390 } 00:18:42.390 ] 00:18:42.390 }, 00:18:42.390 { 00:18:42.390 "subsystem": "bdev", 00:18:42.390 "config": [ 00:18:42.390 { 00:18:42.390 "method": "bdev_set_options", 00:18:42.390 "params": { 00:18:42.390 "bdev_io_pool_size": 65535, 00:18:42.390 "bdev_io_cache_size": 256, 00:18:42.390 "bdev_auto_examine": true, 00:18:42.390 "iobuf_small_cache_size": 128, 00:18:42.390 "iobuf_large_cache_size": 16 00:18:42.390 } 00:18:42.390 }, 00:18:42.390 { 00:18:42.390 "method": "bdev_raid_set_options", 00:18:42.390 "params": { 00:18:42.390 "process_window_size_kb": 1024, 00:18:42.390 "process_max_bandwidth_mb_sec": 0 00:18:42.390 } 00:18:42.390 }, 00:18:42.390 { 00:18:42.390 "method": "bdev_iscsi_set_options", 00:18:42.390 "params": { 00:18:42.390 "timeout_sec": 30 00:18:42.390 } 00:18:42.390 }, 00:18:42.390 { 00:18:42.390 "method": "bdev_nvme_set_options", 00:18:42.390 "params": { 00:18:42.390 "action_on_timeout": "none", 00:18:42.390 "timeout_us": 0, 00:18:42.390 "timeout_admin_us": 0, 00:18:42.390 "keep_alive_timeout_ms": 10000, 00:18:42.390 "arbitration_burst": 0, 00:18:42.390 "low_priority_weight": 0, 00:18:42.390 "medium_priority_weight": 0, 00:18:42.390 "high_priority_weight": 0, 00:18:42.390 "nvme_adminq_poll_period_us": 10000, 00:18:42.390 "nvme_ioq_poll_period_us": 0, 00:18:42.390 "io_queue_requests": 0, 00:18:42.391 "delay_cmd_submit": true, 00:18:42.391 "transport_retry_count": 4, 00:18:42.391 "bdev_retry_count": 3, 00:18:42.391 "transport_ack_timeout": 0, 00:18:42.391 "ctrlr_loss_timeout_sec": 0, 00:18:42.391 "reconnect_delay_sec": 0, 00:18:42.391 "fast_io_fail_timeout_sec": 0, 00:18:42.391 "disable_auto_failback": false, 00:18:42.391 "generate_uuids": false, 00:18:42.391 "transport_tos": 0, 00:18:42.391 "nvme_error_stat": false, 00:18:42.391 "rdma_srq_size": 0, 00:18:42.391 "io_path_stat": false, 00:18:42.391 "allow_accel_sequence": false, 00:18:42.391 "rdma_max_cq_size": 0, 00:18:42.391 "rdma_cm_event_timeout_ms": 0, 00:18:42.391 "dhchap_digests": [ 00:18:42.391 "sha256", 00:18:42.391 "sha384", 00:18:42.391 "sha512" 00:18:42.391 ], 00:18:42.391 "dhchap_dhgroups": [ 00:18:42.391 "null", 00:18:42.391 "ffdhe2048", 00:18:42.391 "ffdhe3072", 00:18:42.391 "ffdhe4096", 00:18:42.391 "ffdhe6144", 00:18:42.391 "ffdhe8192" 00:18:42.391 ] 00:18:42.391 } 00:18:42.391 }, 00:18:42.391 { 00:18:42.391 "method": "bdev_nvme_set_hotplug", 00:18:42.391 "params": { 00:18:42.391 "period_us": 100000, 00:18:42.391 "enable": false 00:18:42.391 } 00:18:42.391 }, 00:18:42.391 { 00:18:42.391 "method": "bdev_malloc_create", 00:18:42.391 "params": { 00:18:42.391 "name": "malloc0", 00:18:42.391 "num_blocks": 8192, 00:18:42.391 "block_size": 4096, 00:18:42.391 "physical_block_size": 4096, 00:18:42.391 "uuid": "39aa3cb2-cc8b-4f69-b31a-185ff42ffd69", 00:18:42.391 "optimal_io_boundary": 0, 00:18:42.391 "md_size": 0, 00:18:42.391 "dif_type": 0, 00:18:42.391 "dif_is_head_of_md": false, 00:18:42.391 "dif_pi_format": 0 00:18:42.391 } 00:18:42.391 }, 00:18:42.391 { 00:18:42.391 "method": "bdev_wait_for_examine" 00:18:42.391 } 00:18:42.391 ] 00:18:42.391 }, 00:18:42.391 { 00:18:42.391 "subsystem": "nbd", 00:18:42.391 "config": 
[] 00:18:42.391 }, 00:18:42.391 { 00:18:42.391 "subsystem": "scheduler", 00:18:42.391 "config": [ 00:18:42.391 { 00:18:42.391 "method": "framework_set_scheduler", 00:18:42.391 "params": { 00:18:42.391 "name": "static" 00:18:42.391 } 00:18:42.391 } 00:18:42.391 ] 00:18:42.391 }, 00:18:42.391 { 00:18:42.391 "subsystem": "nvmf", 00:18:42.391 "config": [ 00:18:42.391 { 00:18:42.391 "method": "nvmf_set_config", 00:18:42.391 "params": { 00:18:42.391 "discovery_filter": "match_any", 00:18:42.391 "admin_cmd_passthru": { 00:18:42.391 "identify_ctrlr": false 00:18:42.391 }, 00:18:42.391 "dhchap_digests": [ 00:18:42.391 "sha256", 00:18:42.391 "sha384", 00:18:42.391 "sha512" 00:18:42.391 ], 00:18:42.391 "dhchap_dhgroups": [ 00:18:42.391 "null", 00:18:42.391 "ffdhe2048", 00:18:42.391 "ffdhe3072", 00:18:42.391 "ffdhe4096", 00:18:42.391 "ffdhe6144", 00:18:42.391 "ffdhe8192" 00:18:42.391 ] 00:18:42.391 } 00:18:42.391 }, 00:18:42.391 { 00:18:42.391 "method": "nvmf_set_max_subsystems", 00:18:42.391 "params": { 00:18:42.391 "max_subsystems": 1024 00:18:42.391 } 00:18:42.391 }, 00:18:42.391 { 00:18:42.391 "method": "nvmf_set_crdt", 00:18:42.391 "params": { 00:18:42.391 "crdt1": 0, 00:18:42.391 "crdt2": 0, 00:18:42.391 "crdt3": 0 00:18:42.391 } 00:18:42.391 }, 00:18:42.391 { 00:18:42.391 "method": "nvmf_create_transport", 00:18:42.391 "params": { 00:18:42.391 "trtype": "TCP", 00:18:42.391 "max_queue_depth": 128, 00:18:42.391 "max_io_qpairs_per_ctrlr": 127, 00:18:42.391 "in_capsule_data_size": 4096, 00:18:42.391 "max_io_size": 131072, 00:18:42.391 "io_unit_size": 131072, 00:18:42.391 "max_aq_depth": 128, 00:18:42.391 "num_shared_buffers": 511, 00:18:42.391 "buf_cache_size": 4294967295, 00:18:42.391 "dif_insert_or_strip": false, 00:18:42.391 "zcopy": false, 00:18:42.391 "c2h_success": false, 00:18:42.391 "sock_priority": 0, 00:18:42.391 "abort_timeout_sec": 1, 00:18:42.391 "ack_timeout": 0, 00:18:42.391 "data_wr_pool_size": 0 00:18:42.391 } 00:18:42.391 }, 00:18:42.391 { 00:18:42.391 "method": "nvmf_create_subsystem", 00:18:42.391 "params": { 00:18:42.391 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.391 "allow_any_host": false, 00:18:42.391 "serial_number": "00000000000000000000", 00:18:42.391 "model_number": "SPDK bdev Controller", 00:18:42.391 "max_namespaces": 32, 00:18:42.391 "min_cntlid": 1, 00:18:42.391 "max_cntlid": 65519, 00:18:42.391 "ana_reporting": false 00:18:42.391 } 00:18:42.391 }, 00:18:42.391 { 00:18:42.391 "method": "nvmf_subsystem_add_host", 00:18:42.391 "params": { 00:18:42.391 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.391 "host": "nqn.2016-06.io.spdk:host1", 00:18:42.391 "psk": "key0" 00:18:42.391 } 00:18:42.391 }, 00:18:42.391 { 00:18:42.391 "method": "nvmf_subsystem_add_ns", 00:18:42.391 "params": { 00:18:42.391 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.391 "namespace": { 00:18:42.391 "nsid": 1, 00:18:42.391 "bdev_name": "malloc0", 00:18:42.391 "nguid": "39AA3CB2CC8B4F69B31A185FF42FFD69", 00:18:42.391 "uuid": "39aa3cb2-cc8b-4f69-b31a-185ff42ffd69", 00:18:42.391 "no_auto_visible": false 00:18:42.391 } 00:18:42.391 } 00:18:42.391 }, 00:18:42.391 { 00:18:42.391 "method": "nvmf_subsystem_add_listener", 00:18:42.391 "params": { 00:18:42.391 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.391 "listen_address": { 00:18:42.391 "trtype": "TCP", 00:18:42.391 "adrfam": "IPv4", 00:18:42.391 "traddr": "10.0.0.2", 00:18:42.391 "trsvcid": "4420" 00:18:42.391 }, 00:18:42.391 "secure_channel": false, 00:18:42.391 "sock_impl": "ssl" 00:18:42.391 } 00:18:42.391 } 00:18:42.391 ] 00:18:42.391 } 
00:18:42.391 ] 00:18:42.391 }' 00:18:42.391 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:42.649 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:42.649 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2526884 00:18:42.649 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:42.649 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2526884 00:18:42.649 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2526884 ']' 00:18:42.649 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.649 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:42.649 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.649 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:42.649 07:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:42.649 [2024-11-20 07:20:45.875525] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:18:42.649 [2024-11-20 07:20:45.875603] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.649 [2024-11-20 07:20:45.948330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.649 [2024-11-20 07:20:46.005759] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:42.649 [2024-11-20 07:20:46.005815] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:42.649 [2024-11-20 07:20:46.005844] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:42.649 [2024-11-20 07:20:46.005856] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:42.649 [2024-11-20 07:20:46.005865] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
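The JSON blob echoed above is the output of save_config captured from the previous target, and the new nvmf_tgt is booted directly from it through the /dev/fd/62 seen in the trace, so the subsystem, TLS listener, namespace and PSK host come up preconfigured without replaying individual RPCs. A rough equivalent, simplifying the config handling done by nvmfappstart:

# Sketch: capture the live configuration, then start a fresh target from it
# via process substitution (the /dev/fd/62 path in the trace above).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

tgtcfg=$("$SPDK/scripts/rpc.py" save_config)

ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF \
    -c <(echo "$tgtcfg") &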
00:18:42.649 [2024-11-20 07:20:46.006499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.907 [2024-11-20 07:20:46.251739] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:42.907 [2024-11-20 07:20:46.283778] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:42.907 [2024-11-20 07:20:46.284003] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:43.474 07:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:43.474 07:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:43.474 07:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:43.474 07:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:43.474 07:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.474 07:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:43.474 07:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2527035 00:18:43.474 07:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2527035 /var/tmp/bdevperf.sock 00:18:43.474 07:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2527035 ']' 00:18:43.474 07:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:43.474 07:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:43.475 07:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:43.475 07:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:43.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
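The same replay trick is used on the initiator side: the configuration saved earlier from /var/tmp/bdevperf.sock (keyring entry, bdev_nvme attach with psk key0, histogram enable) is fed to a new bdevperf process through /dev/fd/63, so no per-RPC setup is needed before perform_tests. A rough equivalent, assuming the previous bdevperf has already been stopped:

# Sketch: save the bdevperf-side config, restart bdevperf from it, then run.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

bperfcfg=$("$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock save_config)

"$SPDK/build/examples/bdevperf" -m 2 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &

# Once the socket is up, confirm the controller exists and kick off the workload.
"$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests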
00:18:43.475 07:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:18:43.475 "subsystems": [ 00:18:43.475 { 00:18:43.475 "subsystem": "keyring", 00:18:43.475 "config": [ 00:18:43.475 { 00:18:43.475 "method": "keyring_file_add_key", 00:18:43.475 "params": { 00:18:43.475 "name": "key0", 00:18:43.475 "path": "/tmp/tmp.yOPKBkQtAb" 00:18:43.475 } 00:18:43.475 } 00:18:43.475 ] 00:18:43.475 }, 00:18:43.475 { 00:18:43.475 "subsystem": "iobuf", 00:18:43.475 "config": [ 00:18:43.475 { 00:18:43.475 "method": "iobuf_set_options", 00:18:43.475 "params": { 00:18:43.475 "small_pool_count": 8192, 00:18:43.475 "large_pool_count": 1024, 00:18:43.475 "small_bufsize": 8192, 00:18:43.475 "large_bufsize": 135168, 00:18:43.475 "enable_numa": false 00:18:43.475 } 00:18:43.475 } 00:18:43.475 ] 00:18:43.475 }, 00:18:43.475 { 00:18:43.475 "subsystem": "sock", 00:18:43.475 "config": [ 00:18:43.475 { 00:18:43.475 "method": "sock_set_default_impl", 00:18:43.475 "params": { 00:18:43.475 "impl_name": "posix" 00:18:43.475 } 00:18:43.475 }, 00:18:43.475 { 00:18:43.475 "method": "sock_impl_set_options", 00:18:43.475 "params": { 00:18:43.475 "impl_name": "ssl", 00:18:43.475 "recv_buf_size": 4096, 00:18:43.475 "send_buf_size": 4096, 00:18:43.475 "enable_recv_pipe": true, 00:18:43.475 "enable_quickack": false, 00:18:43.475 "enable_placement_id": 0, 00:18:43.475 "enable_zerocopy_send_server": true, 00:18:43.475 "enable_zerocopy_send_client": false, 00:18:43.475 "zerocopy_threshold": 0, 00:18:43.475 "tls_version": 0, 00:18:43.475 "enable_ktls": false 00:18:43.475 } 00:18:43.475 }, 00:18:43.475 { 00:18:43.475 "method": "sock_impl_set_options", 00:18:43.475 "params": { 00:18:43.475 "impl_name": "posix", 00:18:43.475 "recv_buf_size": 2097152, 00:18:43.475 "send_buf_size": 2097152, 00:18:43.475 "enable_recv_pipe": true, 00:18:43.475 "enable_quickack": false, 00:18:43.475 "enable_placement_id": 0, 00:18:43.475 "enable_zerocopy_send_server": true, 00:18:43.475 "enable_zerocopy_send_client": false, 00:18:43.475 "zerocopy_threshold": 0, 00:18:43.475 "tls_version": 0, 00:18:43.475 "enable_ktls": false 00:18:43.475 } 00:18:43.475 } 00:18:43.475 ] 00:18:43.475 }, 00:18:43.475 { 00:18:43.475 "subsystem": "vmd", 00:18:43.475 "config": [] 00:18:43.475 }, 00:18:43.475 { 00:18:43.475 "subsystem": "accel", 00:18:43.475 "config": [ 00:18:43.475 { 00:18:43.475 "method": "accel_set_options", 00:18:43.475 "params": { 00:18:43.475 "small_cache_size": 128, 00:18:43.475 "large_cache_size": 16, 00:18:43.475 "task_count": 2048, 00:18:43.475 "sequence_count": 2048, 00:18:43.475 "buf_count": 2048 00:18:43.475 } 00:18:43.475 } 00:18:43.475 ] 00:18:43.475 }, 00:18:43.475 { 00:18:43.475 "subsystem": "bdev", 00:18:43.475 "config": [ 00:18:43.475 { 00:18:43.475 "method": "bdev_set_options", 00:18:43.475 "params": { 00:18:43.475 "bdev_io_pool_size": 65535, 00:18:43.475 "bdev_io_cache_size": 256, 00:18:43.475 "bdev_auto_examine": true, 00:18:43.475 "iobuf_small_cache_size": 128, 00:18:43.475 "iobuf_large_cache_size": 16 00:18:43.475 } 00:18:43.475 }, 00:18:43.475 { 00:18:43.475 "method": "bdev_raid_set_options", 00:18:43.475 "params": { 00:18:43.475 "process_window_size_kb": 1024, 00:18:43.475 "process_max_bandwidth_mb_sec": 0 00:18:43.475 } 00:18:43.475 }, 00:18:43.475 { 00:18:43.475 "method": "bdev_iscsi_set_options", 00:18:43.475 "params": { 00:18:43.475 "timeout_sec": 30 00:18:43.475 } 00:18:43.475 }, 00:18:43.475 { 00:18:43.475 "method": "bdev_nvme_set_options", 00:18:43.475 "params": { 00:18:43.475 "action_on_timeout": "none", 
00:18:43.475 "timeout_us": 0, 00:18:43.475 "timeout_admin_us": 0, 00:18:43.475 "keep_alive_timeout_ms": 10000, 00:18:43.475 "arbitration_burst": 0, 00:18:43.475 "low_priority_weight": 0, 00:18:43.475 "medium_priority_weight": 0, 00:18:43.475 "high_priority_weight": 0, 00:18:43.475 "nvme_adminq_poll_period_us": 10000, 00:18:43.475 "nvme_ioq_poll_period_us": 0, 00:18:43.475 "io_queue_requests": 512, 00:18:43.475 "delay_cmd_submit": true, 00:18:43.475 "transport_retry_count": 4, 00:18:43.475 "bdev_retry_count": 3, 00:18:43.475 "transport_ack_timeout": 0, 00:18:43.475 "ctrlr_loss_timeout_sec": 0, 00:18:43.475 "reconnect_delay_sec": 0, 00:18:43.475 "fast_io_fail_timeout_sec": 0, 00:18:43.475 "disable_auto_failback": false, 00:18:43.475 "generate_uuids": false, 00:18:43.475 "transport_tos": 0, 00:18:43.475 "nvme_error_stat": false, 00:18:43.475 "rdma_srq_size": 0, 00:18:43.475 "io_path_stat": false, 00:18:43.475 "allow_accel_sequence": false, 00:18:43.475 "rdma_max_cq_size": 0, 00:18:43.475 "rdma_cm_event_timeout_ms": 0, 00:18:43.475 "dhchap_digests": [ 00:18:43.475 "sha256", 00:18:43.475 "sha384", 00:18:43.475 "sha512" 00:18:43.475 ], 00:18:43.475 "dhchap_dhgroups": [ 00:18:43.475 "null", 00:18:43.475 "ffdhe2048", 00:18:43.475 "ffdhe3072", 00:18:43.475 "ffdhe4096", 00:18:43.475 "ffdhe6144", 00:18:43.475 "ffdhe8192" 00:18:43.475 ] 00:18:43.475 } 00:18:43.475 }, 00:18:43.475 { 00:18:43.475 "method": "bdev_nvme_attach_controller", 00:18:43.475 "params": { 00:18:43.475 "name": "nvme0", 00:18:43.475 "trtype": "TCP", 00:18:43.475 "adrfam": "IPv4", 00:18:43.475 "traddr": "10.0.0.2", 00:18:43.475 "trsvcid": "4420", 00:18:43.475 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.475 "prchk_reftag": false, 00:18:43.475 "prchk_guard": false, 00:18:43.475 "ctrlr_loss_timeout_sec": 0, 00:18:43.475 "reconnect_delay_sec": 0, 00:18:43.475 "fast_io_fail_timeout_sec": 0, 00:18:43.475 "psk": "key0", 00:18:43.475 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:43.475 "hdgst": false, 00:18:43.475 "ddgst": false, 00:18:43.475 "multipath": "multipath" 00:18:43.476 } 00:18:43.476 }, 00:18:43.476 { 00:18:43.476 "method": "bdev_nvme_set_hotplug", 00:18:43.476 "params": { 00:18:43.476 "period_us": 100000, 00:18:43.476 "enable": false 00:18:43.476 } 00:18:43.476 }, 00:18:43.476 { 00:18:43.476 "method": "bdev_enable_histogram", 00:18:43.476 "params": { 00:18:43.476 "name": "nvme0n1", 00:18:43.476 "enable": true 00:18:43.476 } 00:18:43.476 }, 00:18:43.476 { 00:18:43.476 "method": "bdev_wait_for_examine" 00:18:43.476 } 00:18:43.476 ] 00:18:43.476 }, 00:18:43.476 { 00:18:43.476 "subsystem": "nbd", 00:18:43.476 "config": [] 00:18:43.476 } 00:18:43.476 ] 00:18:43.476 }' 00:18:43.476 07:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:43.476 07:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.734 [2024-11-20 07:20:46.940833] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:18:43.734 [2024-11-20 07:20:46.940914] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2527035 ] 00:18:43.734 [2024-11-20 07:20:47.009224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.734 [2024-11-20 07:20:47.068363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:43.992 [2024-11-20 07:20:47.256909] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:43.992 07:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:43.992 07:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:43.992 07:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:43.992 07:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:18:44.556 07:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.556 07:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:44.556 Running I/O for 1 seconds... 00:18:45.489 3426.00 IOPS, 13.38 MiB/s 00:18:45.489 Latency(us) 00:18:45.489 [2024-11-20T06:20:48.922Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.489 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:45.489 Verification LBA range: start 0x0 length 0x2000 00:18:45.489 nvme0n1 : 1.02 3490.78 13.64 0.00 0.00 36351.40 6456.51 32622.36 00:18:45.489 [2024-11-20T06:20:48.922Z] =================================================================================================================== 00:18:45.489 [2024-11-20T06:20:48.922Z] Total : 3490.78 13.64 0.00 0.00 36351.40 6456.51 32622.36 00:18:45.489 { 00:18:45.489 "results": [ 00:18:45.489 { 00:18:45.490 "job": "nvme0n1", 00:18:45.490 "core_mask": "0x2", 00:18:45.490 "workload": "verify", 00:18:45.490 "status": "finished", 00:18:45.490 "verify_range": { 00:18:45.490 "start": 0, 00:18:45.490 "length": 8192 00:18:45.490 }, 00:18:45.490 "queue_depth": 128, 00:18:45.490 "io_size": 4096, 00:18:45.490 "runtime": 1.018396, 00:18:45.490 "iops": 3490.783545889811, 00:18:45.490 "mibps": 13.635873226132075, 00:18:45.490 "io_failed": 0, 00:18:45.490 "io_timeout": 0, 00:18:45.490 "avg_latency_us": 36351.39937281867, 00:18:45.490 "min_latency_us": 6456.50962962963, 00:18:45.490 "max_latency_us": 32622.364444444444 00:18:45.490 } 00:18:45.490 ], 00:18:45.490 "core_count": 1 00:18:45.490 } 00:18:45.490 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:18:45.490 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:18:45.490 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:45.490 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:18:45.490 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 00:18:45.490 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = 
--pid ']' 00:18:45.490 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:45.490 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:18:45.490 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:18:45.490 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:18:45.490 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:45.490 nvmf_trace.0 00:18:45.748 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:18:45.748 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2527035 00:18:45.748 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2527035 ']' 00:18:45.748 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2527035 00:18:45.748 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:45.748 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:45.748 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2527035 00:18:45.748 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:45.748 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:45.748 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2527035' 00:18:45.748 killing process with pid 2527035 00:18:45.748 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2527035 00:18:45.748 Received shutdown signal, test time was about 1.000000 seconds 00:18:45.748 00:18:45.748 Latency(us) 00:18:45.748 [2024-11-20T06:20:49.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.748 [2024-11-20T06:20:49.181Z] =================================================================================================================== 00:18:45.748 [2024-11-20T06:20:49.181Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:45.748 07:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2527035 00:18:46.005 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:46.005 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:46.005 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:18:46.005 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:46.005 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:18:46.005 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:46.005 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:46.005 rmmod nvme_tcp 00:18:46.005 rmmod nvme_fabrics 00:18:46.005 rmmod nvme_keyring 00:18:46.005 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:46.005 07:20:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:18:46.005 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:18:46.005 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2526884 ']' 00:18:46.005 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2526884 00:18:46.005 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2526884 ']' 00:18:46.005 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2526884 00:18:46.005 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:46.005 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:46.005 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2526884 00:18:46.005 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:46.005 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:46.005 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2526884' 00:18:46.005 killing process with pid 2526884 00:18:46.005 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2526884 00:18:46.005 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2526884 00:18:46.263 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:46.263 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:46.263 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:46.263 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:18:46.263 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:18:46.263 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:46.263 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:18:46.263 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:46.263 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:46.263 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.263 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:46.263 07:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.169 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:48.169 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.8Q6b20twa9 /tmp/tmp.DsfkZqeVMP /tmp/tmp.yOPKBkQtAb 00:18:48.169 00:18:48.169 real 1m23.054s 00:18:48.169 user 2m20.302s 00:18:48.169 sys 0m24.362s 00:18:48.169 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:48.169 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.169 ************************************ 00:18:48.169 END TEST nvmf_tls 
00:18:48.169 ************************************ 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:48.429 ************************************ 00:18:48.429 START TEST nvmf_fips 00:18:48.429 ************************************ 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:48.429 * Looking for test storage... 00:18:48.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:48.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.429 --rc genhtml_branch_coverage=1 00:18:48.429 --rc genhtml_function_coverage=1 00:18:48.429 --rc genhtml_legend=1 00:18:48.429 --rc geninfo_all_blocks=1 00:18:48.429 --rc geninfo_unexecuted_blocks=1 00:18:48.429 00:18:48.429 ' 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:48.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.429 --rc genhtml_branch_coverage=1 00:18:48.429 --rc genhtml_function_coverage=1 00:18:48.429 --rc genhtml_legend=1 00:18:48.429 --rc geninfo_all_blocks=1 00:18:48.429 --rc geninfo_unexecuted_blocks=1 00:18:48.429 00:18:48.429 ' 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:48.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.429 --rc genhtml_branch_coverage=1 00:18:48.429 --rc genhtml_function_coverage=1 00:18:48.429 --rc genhtml_legend=1 00:18:48.429 --rc geninfo_all_blocks=1 00:18:48.429 --rc geninfo_unexecuted_blocks=1 00:18:48.429 00:18:48.429 ' 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:48.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.429 --rc genhtml_branch_coverage=1 00:18:48.429 --rc genhtml_function_coverage=1 00:18:48.429 --rc genhtml_legend=1 00:18:48.429 --rc geninfo_all_blocks=1 00:18:48.429 --rc geninfo_unexecuted_blocks=1 00:18:48.429 00:18:48.429 ' 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:48.429 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:48.430 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:18:48.430 07:20:51 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:18:48.430 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:18:48.689 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:18:48.689 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:18:48.689 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:48.689 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:18:48.689 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:18:48.689 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:18:48.689 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:18:48.689 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:18:48.689 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:18:48.689 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:48.689 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:18:48.689 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:18:48.689 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:18:48.689 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:48.689 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:18:48.689 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:48.689 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:18:48.689 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:48.689 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:18:48.689 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:48.689 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:18:48.689 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:18:48.689 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:18:48.689 Error setting digest 00:18:48.689 40C2306DB97F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:18:48.689 40C2306DB97F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:18:48.689 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:18:48.689 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:48.689 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:48.689 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:48.689 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:18:48.689 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:48.689 
07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:48.689 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:48.689 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:48.689 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:48.689 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.689 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:48.689 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.689 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:48.689 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:48.689 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:18:48.689 07:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:50.594 07:20:53 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:50.594 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:50.594 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:50.594 07:20:53 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:50.594 Found net devices under 0000:09:00.0: cvl_0_0 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:50.594 Found net devices under 0000:09:00.1: cvl_0_1 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:50.594 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:50.595 07:20:53 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:50.595 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:50.595 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:50.595 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:50.595 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:50.595 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:50.595 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:50.595 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:50.595 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:50.595 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:50.595 07:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:50.595 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:50.595 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:50.595 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:50.595 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:50.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:50.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:18:50.595 00:18:50.595 --- 10.0.0.2 ping statistics --- 00:18:50.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.595 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:18:50.595 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:50.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:50.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:18:50.854 00:18:50.854 --- 10.0.0.1 ping statistics --- 00:18:50.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.854 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:18:50.854 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:50.854 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:18:50.854 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:50.854 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:50.854 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:50.854 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:50.854 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:50.854 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:50.854 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:50.854 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:18:50.854 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:50.854 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:50.854 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:50.854 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2529271 00:18:50.854 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:50.854 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2529271 00:18:50.854 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 2529271 ']' 00:18:50.854 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.854 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:50.854 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:50.854 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:50.854 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:50.854 [2024-11-20 07:20:54.128729] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:18:50.854 [2024-11-20 07:20:54.128813] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:50.854 [2024-11-20 07:20:54.199281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.854 [2024-11-20 07:20:54.255787] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:50.854 [2024-11-20 07:20:54.255840] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:50.854 [2024-11-20 07:20:54.255869] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:50.854 [2024-11-20 07:20:54.255880] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:50.854 [2024-11-20 07:20:54.255890] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:50.854 [2024-11-20 07:20:54.256486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:51.114 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:51.114 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:18:51.114 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:51.114 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:51.114 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:51.114 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:51.114 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:18:51.114 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:51.114 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:18:51.114 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Ojn 00:18:51.114 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:51.114 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Ojn 00:18:51.114 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Ojn 00:18:51.114 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Ojn 00:18:51.114 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:51.406 [2024-11-20 07:20:54.676124] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:51.406 [2024-11-20 07:20:54.692121] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:51.406 [2024-11-20 07:20:54.692386] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:51.406 malloc0 00:18:51.406 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:51.406 07:20:54 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2529423 00:18:51.406 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:51.406 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2529423 /var/tmp/bdevperf.sock 00:18:51.406 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 2529423 ']' 00:18:51.406 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:51.406 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:51.406 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:51.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:51.406 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:51.406 07:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:51.688 [2024-11-20 07:20:54.826884] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:18:51.688 [2024-11-20 07:20:54.826985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2529423 ] 00:18:51.688 [2024-11-20 07:20:54.893071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.688 [2024-11-20 07:20:54.950470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:51.688 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:51.688 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:18:51.688 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Ojn 00:18:51.946 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:52.205 [2024-11-20 07:20:55.604049] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:52.462 TLSTESTn1 00:18:52.462 07:20:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:52.462 Running I/O for 10 seconds... 
00:18:54.769 2920.00 IOPS, 11.41 MiB/s [2024-11-20T06:20:59.137Z] 2978.50 IOPS, 11.63 MiB/s [2024-11-20T06:21:00.069Z] 2992.67 IOPS, 11.69 MiB/s [2024-11-20T06:21:01.002Z] 3007.00 IOPS, 11.75 MiB/s [2024-11-20T06:21:01.934Z] 2990.80 IOPS, 11.68 MiB/s [2024-11-20T06:21:02.866Z] 2998.17 IOPS, 11.71 MiB/s [2024-11-20T06:21:04.239Z] 3001.00 IOPS, 11.72 MiB/s [2024-11-20T06:21:05.172Z] 2999.25 IOPS, 11.72 MiB/s [2024-11-20T06:21:06.106Z] 2981.44 IOPS, 11.65 MiB/s [2024-11-20T06:21:06.106Z] 2990.80 IOPS, 11.68 MiB/s 00:19:02.673 Latency(us) 00:19:02.673 [2024-11-20T06:21:06.106Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.673 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:02.673 Verification LBA range: start 0x0 length 0x2000 00:19:02.673 TLSTESTn1 : 10.03 2994.09 11.70 0.00 0.00 42658.75 9563.40 71458.51 00:19:02.673 [2024-11-20T06:21:06.106Z] =================================================================================================================== 00:19:02.673 [2024-11-20T06:21:06.106Z] Total : 2994.09 11.70 0.00 0.00 42658.75 9563.40 71458.51 00:19:02.673 { 00:19:02.673 "results": [ 00:19:02.673 { 00:19:02.673 "job": "TLSTESTn1", 00:19:02.673 "core_mask": "0x4", 00:19:02.673 "workload": "verify", 00:19:02.673 "status": "finished", 00:19:02.673 "verify_range": { 00:19:02.673 "start": 0, 00:19:02.673 "length": 8192 00:19:02.673 }, 00:19:02.673 "queue_depth": 128, 00:19:02.673 "io_size": 4096, 00:19:02.673 "runtime": 10.031747, 00:19:02.673 "iops": 2994.094647721877, 00:19:02.673 "mibps": 11.695682217663583, 00:19:02.673 "io_failed": 0, 00:19:02.673 "io_timeout": 0, 00:19:02.673 "avg_latency_us": 42658.749345526114, 00:19:02.673 "min_latency_us": 9563.401481481482, 00:19:02.673 "max_latency_us": 71458.5125925926 00:19:02.673 } 00:19:02.673 ], 00:19:02.673 "core_count": 1 00:19:02.673 } 00:19:02.673 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:02.673 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:02.673 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:19:02.673 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:19:02.673 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:19:02.673 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:02.673 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:19:02.673 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:19:02.673 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:19:02.673 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:02.673 nvmf_trace.0 00:19:02.673 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:19:02.673 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2529423 00:19:02.673 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 2529423 ']' 00:19:02.673 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@956 -- # kill -0 2529423 00:19:02.673 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:19:02.673 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:02.673 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2529423 00:19:02.673 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:02.674 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:02.674 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2529423' 00:19:02.674 killing process with pid 2529423 00:19:02.674 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 2529423 00:19:02.674 Received shutdown signal, test time was about 10.000000 seconds 00:19:02.674 00:19:02.674 Latency(us) 00:19:02.674 [2024-11-20T06:21:06.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.674 [2024-11-20T06:21:06.107Z] =================================================================================================================== 00:19:02.674 [2024-11-20T06:21:06.107Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:02.674 07:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 2529423 00:19:02.931 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:02.931 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:02.931 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:02.931 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:02.931 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:02.931 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:02.931 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:02.931 rmmod nvme_tcp 00:19:02.931 rmmod nvme_fabrics 00:19:02.931 rmmod nvme_keyring 00:19:02.931 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:02.931 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:02.931 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:02.931 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2529271 ']' 00:19:02.931 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2529271 00:19:02.931 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 2529271 ']' 00:19:02.931 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 2529271 00:19:02.931 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:19:02.931 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:02.931 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2529271 00:19:02.931 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:02.931 07:21:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:02.931 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2529271' 00:19:02.931 killing process with pid 2529271 00:19:02.931 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 2529271 00:19:02.931 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 2529271 00:19:03.190 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:03.190 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:03.190 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:03.190 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:19:03.190 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:19:03.190 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:03.190 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:19:03.190 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:03.190 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:03.190 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.190 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:03.190 07:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Ojn 00:19:05.725 00:19:05.725 real 0m16.970s 00:19:05.725 user 0m19.188s 00:19:05.725 sys 0m6.761s 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:05.725 ************************************ 00:19:05.725 END TEST nvmf_fips 00:19:05.725 ************************************ 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:05.725 ************************************ 00:19:05.725 START TEST nvmf_control_msg_list 00:19:05.725 ************************************ 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:05.725 * Looking for test storage... 
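Before the control_msg_list run gets going, the bdevperf summary above is easy to sanity-check by hand: throughput in MiB/s is just IOPS times the 4 KiB I/O size, and a rough Little's-law estimate from the queue depth of 128 lands near the reported average latency (only an approximation, since ramp-up and completion batching are ignored):

    # Values copied from the JSON results block above
    awk 'BEGIN {
        iops = 2994.094647721877; io_size = 4096; qd = 128
        printf "MiB/s      : %.2f\n", iops * io_size / (1024 * 1024)   # ~11.70, matches "mibps"
        printf "latency_us : %.0f\n", qd / iops * 1e6                  # ~42751, close to the ~42659 us reported
    }'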
00:19:05.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:05.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.725 --rc genhtml_branch_coverage=1 00:19:05.725 --rc genhtml_function_coverage=1 00:19:05.725 --rc genhtml_legend=1 00:19:05.725 --rc geninfo_all_blocks=1 00:19:05.725 --rc geninfo_unexecuted_blocks=1 00:19:05.725 00:19:05.725 ' 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:05.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.725 --rc genhtml_branch_coverage=1 00:19:05.725 --rc genhtml_function_coverage=1 00:19:05.725 --rc genhtml_legend=1 00:19:05.725 --rc geninfo_all_blocks=1 00:19:05.725 --rc geninfo_unexecuted_blocks=1 00:19:05.725 00:19:05.725 ' 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:05.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.725 --rc genhtml_branch_coverage=1 00:19:05.725 --rc genhtml_function_coverage=1 00:19:05.725 --rc genhtml_legend=1 00:19:05.725 --rc geninfo_all_blocks=1 00:19:05.725 --rc geninfo_unexecuted_blocks=1 00:19:05.725 00:19:05.725 ' 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:05.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.725 --rc genhtml_branch_coverage=1 00:19:05.725 --rc genhtml_function_coverage=1 00:19:05.725 --rc genhtml_legend=1 00:19:05.725 --rc geninfo_all_blocks=1 00:19:05.725 --rc geninfo_unexecuted_blocks=1 00:19:05.725 00:19:05.725 ' 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:05.725 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:05.726 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:05.726 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:05.726 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:05.726 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:05.726 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:05.726 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:05.726 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:05.726 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.726 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.726 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.726 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:05.726 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.726 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:05.726 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:05.726 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:05.726 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:05.726 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:05.726 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:05.726 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:05.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:05.726 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:05.726 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:05.726 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:05.726 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:05.726 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:05.726 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:05.726 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:05.726 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:05.726 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:05.726 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:05.726 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:05.726 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:05.726 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:05.726 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:05.726 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:19:05.726 07:21:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:07.624 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:07.624 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:19:07.624 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:07.624 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:07.624 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:07.624 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:07.624 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:07.624 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:19:07.624 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:07.624 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:19:07.624 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:19:07.624 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:19:07.624 07:21:10 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:19:07.624 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:19:07.624 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:19:07.624 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:07.624 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:07.624 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:07.624 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:07.625 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:07.625 07:21:10 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:07.625 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:07.625 Found net devices under 0000:09:00.0: cvl_0_0 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:07.625 Found net devices under 0000:09:00.1: cvl_0_1 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:07.625 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:07.626 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:07.626 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:07.626 07:21:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:07.626 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:07.626 07:21:11 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:07.626 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:07.626 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:07.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:07.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:19:07.626 00:19:07.626 --- 10.0.0.2 ping statistics --- 00:19:07.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.626 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:19:07.626 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:07.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:07.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:19:07.626 00:19:07.626 --- 10.0.0.1 ping statistics --- 00:19:07.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.626 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:19:07.626 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:07.626 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:19:07.626 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:07.626 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:07.626 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:07.626 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:07.626 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:07.626 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:07.626 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:07.626 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:07.626 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:07.626 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:07.626 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:07.626 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2533308 00:19:07.626 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:07.885 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2533308 00:19:07.885 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 2533308 ']' 00:19:07.885 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.885 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:07.885 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.885 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:07.885 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:07.885 [2024-11-20 07:21:11.101554] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:19:07.885 [2024-11-20 07:21:11.101645] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:07.885 [2024-11-20 07:21:11.176995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.885 [2024-11-20 07:21:11.232473] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:07.885 [2024-11-20 07:21:11.232527] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:07.885 [2024-11-20 07:21:11.232556] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:07.885 [2024-11-20 07:21:11.232567] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:07.885 [2024-11-20 07:21:11.232576] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
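The nvmftestinit sequence above separates target from initiator by moving one port of the e810 pair into a private network namespace and launching nvmf_tgt inside it, while the second port stays in the default namespace so the initiator-side tools can reach 10.0.0.2:4420. A condensed replay of the commands visible in the trace (interface names cvl_0_0/cvl_0_1 and the namespace name are specific to this host):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk      # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator-side port stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                             # reachability check from the default namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF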
00:19:07.885 [2024-11-20 07:21:11.233146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.144 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:08.144 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:19:08.144 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:08.144 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:08.144 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:08.144 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:08.144 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:08.144 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:08.144 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:08.144 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.144 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:08.144 [2024-11-20 07:21:11.380465] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:08.144 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.144 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:08.144 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.144 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:08.144 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.144 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:08.144 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.144 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:08.144 Malloc0 00:19:08.144 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.144 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:08.144 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.144 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:08.144 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.145 07:21:11 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:08.145 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.145 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:08.145 [2024-11-20 07:21:11.420462] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:08.145 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.145 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2533336 00:19:08.145 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:08.145 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2533337 00:19:08.145 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:08.145 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2533338 00:19:08.145 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2533336 00:19:08.145 07:21:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:08.145 [2024-11-20 07:21:11.478969] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:08.145 [2024-11-20 07:21:11.489010] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:08.145 [2024-11-20 07:21:11.489294] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:09.518 Initializing NVMe Controllers 00:19:09.518 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:09.518 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:09.518 Initialization complete. Launching workers. 
00:19:09.518 ======================================================== 00:19:09.518 Latency(us) 00:19:09.518 Device Information : IOPS MiB/s Average min max 00:19:09.518 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 2891.00 11.29 345.47 152.51 635.56 00:19:09.518 ======================================================== 00:19:09.518 Total : 2891.00 11.29 345.47 152.51 635.56 00:19:09.518 00:19:09.518 Initializing NVMe Controllers 00:19:09.518 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:09.518 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:09.518 Initialization complete. Launching workers. 00:19:09.518 ======================================================== 00:19:09.518 Latency(us) 00:19:09.518 Device Information : IOPS MiB/s Average min max 00:19:09.518 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3011.00 11.76 331.67 184.69 635.52 00:19:09.518 ======================================================== 00:19:09.518 Total : 3011.00 11.76 331.67 184.69 635.52 00:19:09.518 00:19:09.518 Initializing NVMe Controllers 00:19:09.518 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:09.518 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:09.518 Initialization complete. Launching workers. 00:19:09.518 ======================================================== 00:19:09.518 Latency(us) 00:19:09.518 Device Information : IOPS MiB/s Average min max 00:19:09.518 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 2349.00 9.18 425.36 184.74 718.64 00:19:09.518 ======================================================== 00:19:09.518 Total : 2349.00 9.18 425.36 184.74 718.64 00:19:09.518 00:19:09.518 [2024-11-20 07:21:12.622932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180fa10 is same with the state(6) to be set 00:19:09.518 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2533337 00:19:09.518 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2533338 00:19:09.518 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:09.518 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:09.518 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:09.518 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:19:09.518 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:09.518 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:09.518 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:09.518 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:09.518 rmmod nvme_tcp 00:19:09.518 rmmod nvme_fabrics 00:19:09.518 rmmod nvme_keyring 00:19:09.518 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:09.518 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:09.518 07:21:12 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:09.518 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 2533308 ']' 00:19:09.518 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2533308 00:19:09.518 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 2533308 ']' 00:19:09.518 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 2533308 00:19:09.518 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:19:09.518 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:09.518 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2533308 00:19:09.519 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:09.519 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:09.519 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2533308' 00:19:09.519 killing process with pid 2533308 00:19:09.519 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 2533308 00:19:09.519 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@976 -- # wait 2533308 00:19:09.519 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:09.519 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:09.519 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:09.519 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:19:09.519 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:19:09.519 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:09.519 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:19:09.519 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:09.519 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:09.519 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.519 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:09.519 07:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.056 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:12.056 00:19:12.056 real 0m6.312s 00:19:12.056 user 0m5.496s 00:19:12.056 sys 0m2.652s 00:19:12.056 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:12.056 07:21:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@10 -- # set +x 00:19:12.056 ************************************ 00:19:12.056 END TEST nvmf_control_msg_list 00:19:12.056 ************************************ 00:19:12.056 07:21:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:12.056 07:21:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:12.056 07:21:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:12.056 07:21:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:12.056 ************************************ 00:19:12.056 START TEST nvmf_wait_for_buf 00:19:12.056 ************************************ 00:19:12.056 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:12.056 * Looking for test storage... 00:19:12.056 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:12.056 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:12.056 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:19:12.056 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:12.056 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:12.056 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:12.056 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:12.056 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:12.056 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:12.056 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:12.056 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:12.056 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:19:12.056 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:12.056 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:12.056 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:12.056 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:12.056 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:12.056 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:12.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.057 --rc genhtml_branch_coverage=1 00:19:12.057 --rc genhtml_function_coverage=1 00:19:12.057 --rc genhtml_legend=1 00:19:12.057 --rc geninfo_all_blocks=1 00:19:12.057 --rc geninfo_unexecuted_blocks=1 00:19:12.057 00:19:12.057 ' 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:12.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.057 --rc genhtml_branch_coverage=1 00:19:12.057 --rc genhtml_function_coverage=1 00:19:12.057 --rc genhtml_legend=1 00:19:12.057 --rc geninfo_all_blocks=1 00:19:12.057 --rc geninfo_unexecuted_blocks=1 00:19:12.057 00:19:12.057 ' 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:12.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.057 --rc genhtml_branch_coverage=1 00:19:12.057 --rc genhtml_function_coverage=1 00:19:12.057 --rc genhtml_legend=1 00:19:12.057 --rc geninfo_all_blocks=1 00:19:12.057 --rc geninfo_unexecuted_blocks=1 00:19:12.057 00:19:12.057 ' 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:12.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.057 --rc genhtml_branch_coverage=1 00:19:12.057 --rc genhtml_function_coverage=1 00:19:12.057 --rc genhtml_legend=1 00:19:12.057 --rc geninfo_all_blocks=1 00:19:12.057 --rc geninfo_unexecuted_blocks=1 00:19:12.057 00:19:12.057 ' 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:12.057 07:21:15 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:12.057 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:19:12.057 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:13.959 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:13.959 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:19:13.959 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:13.959 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:13.959 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:13.959 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:13.959 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:13.959 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:19:13.959 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:13.959 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:19:13.959 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:19:13.959 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:19:13.959 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:19:13.959 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:19:13.959 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:19:13.959 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:13.959 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:13.959 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:13.959 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:13.959 
07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:13.959 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:13.959 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:13.959 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:13.959 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:14.218 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:14.218 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:14.218 Found net devices under 0000:09:00.0: cvl_0_0 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:14.218 Found net devices under 0000:09:00.1: cvl_0_1 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:14.218 07:21:17 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:14.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:14.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:19:14.218 00:19:14.218 --- 10.0.0.2 ping statistics --- 00:19:14.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.218 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:14.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:14.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:19:14.218 00:19:14.218 --- 10.0.0.1 ping statistics --- 00:19:14.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.218 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:14.218 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:19:14.219 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:14.219 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:14.219 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:14.219 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:14.219 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:14.219 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:14.219 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:14.219 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:14.219 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:14.219 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:14.219 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:14.219 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2535529 00:19:14.219 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:14.219 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 2535529 00:19:14.219 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 2535529 ']' 00:19:14.219 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.219 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:14.219 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.219 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:14.219 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:14.219 [2024-11-20 07:21:17.610726] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
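
Editor's note: the trace above (nvmf_tcp_init) wires one e810 port into a target network namespace and leaves the other in the root namespace as the initiator. The following is a minimal standalone sketch of that topology, reconstructed from the commands visible in the log; interface names cvl_0_0/cvl_0_1, the namespace name and the 10.0.0.0/24 addresses are taken from the trace, the iptables comment is simplified, and the real helper in spdk/test/nvmf/common.sh may differ in detail. Run as root.

    # target-side namespace; move one port into it, keep the other as initiator
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target (namespace)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP listener port; tagged so teardown can strip it again
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    # verify both directions before starting nvmf_tgt inside the namespace
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
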
00:19:14.219 [2024-11-20 07:21:17.610796] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:14.478 [2024-11-20 07:21:17.698989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.478 [2024-11-20 07:21:17.776524] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:14.478 [2024-11-20 07:21:17.776587] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:14.478 [2024-11-20 07:21:17.776632] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:14.478 [2024-11-20 07:21:17.776658] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:14.478 [2024-11-20 07:21:17.776678] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:14.478 [2024-11-20 07:21:17.777467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.737 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:14.737 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:19:14.737 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:14.737 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:14.737 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:14.737 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:14.737 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:14.737 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:14.737 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:14.737 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.737 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:14.737 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.737 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:14.737 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.737 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:14.737 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.737 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:14.737 07:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.737 07:21:17 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:14.737 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.737 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:14.737 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.737 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:14.737 Malloc0 00:19:14.737 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.737 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:14.737 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.737 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:14.737 [2024-11-20 07:21:18.090219] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:14.737 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.737 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:14.737 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.737 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:14.737 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.737 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:14.737 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.737 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:14.737 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.737 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:14.737 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.737 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:14.737 [2024-11-20 07:21:18.114427] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:14.737 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.737 07:21:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:14.995 [2024-11-20 07:21:18.195443] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:16.366 Initializing NVMe Controllers 00:19:16.366 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:16.366 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:19:16.366 Initialization complete. Launching workers. 00:19:16.366 ======================================================== 00:19:16.366 Latency(us) 00:19:16.366 Device Information : IOPS MiB/s Average min max 00:19:16.366 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 99.65 12.46 41577.54 23960.67 151600.66 00:19:16.366 ======================================================== 00:19:16.366 Total : 99.65 12.46 41577.54 23960.67 151600.66 00:19:16.366 00:19:16.366 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:19:16.366 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:19:16.366 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.366 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:16.366 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.366 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1574 00:19:16.366 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1574 -eq 0 ]] 00:19:16.366 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:16.366 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:19:16.366 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:16.366 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:19:16.366 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:16.366 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:19:16.366 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:16.366 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:16.366 rmmod nvme_tcp 00:19:16.366 rmmod nvme_fabrics 00:19:16.366 rmmod nvme_keyring 00:19:16.366 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:16.366 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:19:16.366 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:19:16.366 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2535529 ']' 00:19:16.366 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2535529 00:19:16.366 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 2535529 ']' 00:19:16.366 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # kill -0 2535529 00:19:16.366 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@957 -- # uname 00:19:16.366 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:16.366 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2535529 00:19:16.366 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:16.366 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:16.366 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2535529' 00:19:16.366 killing process with pid 2535529 00:19:16.366 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 2535529 00:19:16.366 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 2535529 00:19:16.626 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:16.626 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:16.626 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:16.626 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:19:16.626 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:19:16.626 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:16.626 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:19:16.626 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:16.626 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:16.626 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.626 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:16.626 07:21:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.166 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:19.166 00:19:19.166 real 0m6.966s 00:19:19.166 user 0m3.408s 00:19:19.166 sys 0m2.112s 00:19:19.166 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:19.166 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:19.166 ************************************ 00:19:19.166 END TEST nvmf_wait_for_buf 00:19:19.166 ************************************ 00:19:19.166 07:21:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:19:19.166 07:21:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:19:19.166 07:21:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:19:19.166 07:21:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:19:19.166 07:21:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:19:19.166 07:21:22 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:21.068 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:21.068 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:21.068 Found net devices under 0000:09:00.0: cvl_0_0 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:21.068 Found net devices under 0000:09:00.1: cvl_0_1 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:21.068 ************************************ 00:19:21.068 START TEST nvmf_perf_adq 00:19:21.068 ************************************ 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:21.068 * Looking for test storage... 00:19:21.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:21.068 07:21:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:21.068 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:21.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.069 --rc genhtml_branch_coverage=1 00:19:21.069 --rc genhtml_function_coverage=1 00:19:21.069 --rc genhtml_legend=1 00:19:21.069 --rc geninfo_all_blocks=1 00:19:21.069 --rc geninfo_unexecuted_blocks=1 00:19:21.069 00:19:21.069 ' 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:21.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.069 --rc genhtml_branch_coverage=1 00:19:21.069 --rc genhtml_function_coverage=1 00:19:21.069 --rc genhtml_legend=1 00:19:21.069 --rc geninfo_all_blocks=1 00:19:21.069 --rc geninfo_unexecuted_blocks=1 00:19:21.069 00:19:21.069 ' 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:21.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.069 --rc genhtml_branch_coverage=1 00:19:21.069 --rc genhtml_function_coverage=1 00:19:21.069 --rc genhtml_legend=1 00:19:21.069 --rc geninfo_all_blocks=1 00:19:21.069 --rc geninfo_unexecuted_blocks=1 00:19:21.069 00:19:21.069 ' 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:21.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.069 --rc genhtml_branch_coverage=1 00:19:21.069 --rc genhtml_function_coverage=1 00:19:21.069 --rc genhtml_legend=1 00:19:21.069 --rc geninfo_all_blocks=1 00:19:21.069 --rc geninfo_unexecuted_blocks=1 00:19:21.069 00:19:21.069 ' 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
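
Editor's note: the repeated "lt 1.15 2" / cmp_versions trace above is the lcov version probe that decides which coverage options to export. The sketch below is an illustrative reconstruction of that component-wise compare as it can be read from the trace; the actual implementation in spdk/scripts/common.sh differs in structure, and numeric version components are assumed.

    # split versions on '.', '-' and ':' and compare component by component,
    # so that "lt 1.15 2" succeeds and the newer lcov option set is chosen
    cmp_versions() {
        local IFS='.-:' op="$2" v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            if ((${ver1[v]:-0} > ${ver2[v]:-0})); then [[ $op == '>' ]]; return; fi
            if ((${ver1[v]:-0} < ${ver2[v]:-0})); then [[ $op == '<' ]]; return; fi
        done
        [[ $op == *'='* ]]   # all components equal: true only for <=, >=, ==
    }
    lt() { cmp_versions "$1" '<' "$2"; }   # e.g. lt "$(lcov --version | awk '{print $NF}')" 2
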
00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:21.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:21.069 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:21.069 07:21:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:23.034 07:21:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:23.034 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:23.034 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:23.034 Found net devices under 0000:09:00.0: cvl_0_0 00:19:23.034 07:21:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:23.034 Found net devices under 0000:09:00.1: cvl_0_1 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:19:23.034 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:19:23.970 07:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:19:25.871 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:19:31.163 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:19:31.163 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:31.163 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:31.163 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:31.163 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:31.163 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:31.163 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.163 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:31.163 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:31.163 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:31.163 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:19:31.163 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:31.163 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.163 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:31.163 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:31.163 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:31.164 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:31.164 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:31.164 Found net devices under 0000:09:00.0: cvl_0_0 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:31.164 Found net devices under 0000:09:00.1: cvl_0_1 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:31.164 07:21:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:31.164 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:31.164 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:31.164 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:31.164 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:31.164 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:31.164 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:31.164 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:31.164 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:31.164 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:31.164 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:19:31.164 00:19:31.164 --- 10.0.0.2 ping statistics --- 00:19:31.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.164 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:19:31.164 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:31.164 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:31.164 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:19:31.164 00:19:31.164 --- 10.0.0.1 ping statistics --- 00:19:31.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.164 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:19:31.164 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2540252 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2540252 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 2540252 ']' 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.165 [2024-11-20 07:21:34.176182] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:19:31.165 [2024-11-20 07:21:34.176279] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:31.165 [2024-11-20 07:21:34.247449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:31.165 [2024-11-20 07:21:34.307423] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:31.165 [2024-11-20 07:21:34.307471] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:31.165 [2024-11-20 07:21:34.307499] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:31.165 [2024-11-20 07:21:34.307510] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:31.165 [2024-11-20 07:21:34.307520] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:31.165 [2024-11-20 07:21:34.309081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.165 [2024-11-20 07:21:34.309138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.165 [2024-11-20 07:21:34.309206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:31.165 [2024-11-20 07:21:34.309209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.165 
07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.165 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.165 [2024-11-20 07:21:34.591073] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:31.423 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.423 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:31.423 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.423 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.423 Malloc1 00:19:31.423 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.423 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:31.423 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.423 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.423 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.423 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:31.423 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.423 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.423 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.423 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:31.423 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.423 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.423 [2024-11-20 07:21:34.655857] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:31.423 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.423 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2540372 00:19:31.423 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:19:31.423 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:33.324 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:19:33.324 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.324 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:33.324 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.324 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:19:33.324 "tick_rate": 2700000000, 00:19:33.324 "poll_groups": [ 00:19:33.324 { 00:19:33.324 "name": "nvmf_tgt_poll_group_000", 00:19:33.324 "admin_qpairs": 1, 00:19:33.324 "io_qpairs": 1, 00:19:33.324 "current_admin_qpairs": 1, 00:19:33.324 "current_io_qpairs": 1, 00:19:33.324 "pending_bdev_io": 0, 00:19:33.324 "completed_nvme_io": 18904, 00:19:33.324 "transports": [ 00:19:33.324 { 00:19:33.324 "trtype": "TCP" 00:19:33.324 } 00:19:33.324 ] 00:19:33.324 }, 00:19:33.324 { 00:19:33.324 "name": "nvmf_tgt_poll_group_001", 00:19:33.324 "admin_qpairs": 0, 00:19:33.324 "io_qpairs": 1, 00:19:33.324 "current_admin_qpairs": 0, 00:19:33.324 "current_io_qpairs": 1, 00:19:33.324 "pending_bdev_io": 0, 00:19:33.324 "completed_nvme_io": 19988, 00:19:33.324 "transports": [ 00:19:33.324 { 00:19:33.324 "trtype": "TCP" 00:19:33.324 } 00:19:33.324 ] 00:19:33.324 }, 00:19:33.324 { 00:19:33.324 "name": "nvmf_tgt_poll_group_002", 00:19:33.324 "admin_qpairs": 0, 00:19:33.324 "io_qpairs": 1, 00:19:33.324 "current_admin_qpairs": 0, 00:19:33.324 "current_io_qpairs": 1, 00:19:33.324 "pending_bdev_io": 0, 00:19:33.324 "completed_nvme_io": 20189, 00:19:33.324 "transports": [ 00:19:33.324 { 00:19:33.324 "trtype": "TCP" 00:19:33.324 } 00:19:33.324 ] 00:19:33.324 }, 00:19:33.324 { 00:19:33.324 "name": "nvmf_tgt_poll_group_003", 00:19:33.324 "admin_qpairs": 0, 00:19:33.324 "io_qpairs": 1, 00:19:33.324 "current_admin_qpairs": 0, 00:19:33.324 "current_io_qpairs": 1, 00:19:33.324 "pending_bdev_io": 0, 00:19:33.324 "completed_nvme_io": 19841, 00:19:33.324 "transports": [ 00:19:33.324 { 00:19:33.324 "trtype": "TCP" 00:19:33.324 } 00:19:33.324 ] 00:19:33.324 } 00:19:33.324 ] 00:19:33.324 }' 00:19:33.324 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:33.324 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:19:33.324 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:19:33.324 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:19:33.324 07:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2540372 00:19:41.427 Initializing NVMe Controllers 00:19:41.427 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:41.427 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:41.427 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:41.427 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:41.427 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:19:41.427 Initialization complete. Launching workers. 00:19:41.427 ======================================================== 00:19:41.427 Latency(us) 00:19:41.427 Device Information : IOPS MiB/s Average min max 00:19:41.427 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10650.80 41.60 6010.12 2540.16 10067.54 00:19:41.427 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10478.00 40.93 6109.52 2486.36 9607.33 00:19:41.427 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10447.50 40.81 6125.73 2616.95 10351.94 00:19:41.427 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10001.00 39.07 6399.16 2314.19 11593.44 00:19:41.427 ======================================================== 00:19:41.427 Total : 41577.29 162.41 6157.80 2314.19 11593.44 00:19:41.427 00:19:41.427 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:19:41.427 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:41.427 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:19:41.427 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:41.427 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:19:41.427 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:41.427 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:41.427 rmmod nvme_tcp 00:19:41.427 rmmod nvme_fabrics 00:19:41.427 rmmod nvme_keyring 00:19:41.427 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:41.427 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:19:41.427 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:19:41.685 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2540252 ']' 00:19:41.685 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2540252 00:19:41.685 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 2540252 ']' 00:19:41.685 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 2540252 00:19:41.685 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:19:41.685 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:41.685 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2540252 00:19:41.685 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:41.685 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:41.685 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2540252' 00:19:41.685 killing process with pid 2540252 00:19:41.685 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 2540252 00:19:41.685 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 2540252 00:19:41.944 07:21:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:41.944 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:41.944 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:41.944 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:19:41.944 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:19:41.944 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:41.944 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:19:41.944 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:41.944 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:41.944 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.944 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:41.944 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.849 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:43.849 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:19:43.849 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:19:43.849 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:19:44.416 07:21:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:19:46.946 07:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:19:52.223 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:19:52.223 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:52.223 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:52.223 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:52.223 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:52.223 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:52.223 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.223 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:52.223 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.223 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:52.223 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:52.223 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:52.223 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:52.223 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:52.223 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:52.223 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:52.223 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:52.223 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:52.223 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:52.223 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:52.224 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:52.224 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:52.224 Found net devices under 0000:09:00.0: cvl_0_0 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:52.224 Found net devices under 0000:09:00.1: cvl_0_1 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:52.224 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:52.224 07:21:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:52.225 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:52.225 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:52.225 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:52.225 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:52.225 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:52.225 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:52.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:52.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:19:52.225 00:19:52.225 --- 10.0.0.2 ping statistics --- 00:19:52.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.225 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:19:52.225 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:52.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:52.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:19:52.225 00:19:52.225 --- 10.0.0.1 ping statistics --- 00:19:52.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.225 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:19:52.225 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:52.225 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:19:52.225 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:52.225 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:52.225 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:52.225 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:52.225 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:52.225 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:52.225 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:52.225 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:19:52.225 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:19:52.225 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:19:52.225 net.core.busy_poll = 1 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:19:52.225 net.core.busy_read = 1 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2542902 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2542902 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 2542902 ']' 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:52.225 [2024-11-20 07:21:55.164414] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:19:52.225 [2024-11-20 07:21:55.164502] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.225 [2024-11-20 07:21:55.240747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:52.225 [2024-11-20 07:21:55.300845] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
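The adq_configure_driver phase traced above boils down to a short host-side sequence. A minimal recap of those commands, gathered from the trace (interface cvl_0_0 inside namespace cvl_0_0_ns_spdk; the set_xps_rxqs path is shortened to the in-tree script):

# Enable hardware TC offload on the E810 port and turn off packet-inspect optimization.
ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
# Busy polling so socket reads poll the device queues instead of sleeping.
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# Two traffic classes: TC0 gets queues 0-1, TC1 gets queues 2-3, offloaded in channel mode.
ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
# Steer NVMe/TCP traffic for 10.0.0.2:4420 into TC1 in hardware (skip_sw).
ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
# Run the SPDK helper that aligns XPS queue-to-CPU mappings for the interface.
ip netns exec cvl_0_0_ns_spdk scripts/perf/nvmf/set_xps_rxqs cvl_0_0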
00:19:52.225 [2024-11-20 07:21:55.300899] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.225 [2024-11-20 07:21:55.300923] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.225 [2024-11-20 07:21:55.300934] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.225 [2024-11-20 07:21:55.300943] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:52.225 [2024-11-20 07:21:55.302548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.225 [2024-11-20 07:21:55.302582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:52.225 [2024-11-20 07:21:55.302621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:52.225 [2024-11-20 07:21:55.302623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.225 07:21:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.225 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:52.226 [2024-11-20 07:21:55.569545] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:52.226 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.226 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:52.226 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.226 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:52.226 Malloc1 00:19:52.226 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.226 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:52.226 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.226 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:52.226 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.226 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:52.226 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.226 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:52.226 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.226 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:52.226 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.226 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:52.226 [2024-11-20 07:21:55.636629] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:52.226 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.226 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2543045 00:19:52.226 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:19:52.226 07:21:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:54.754 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:19:54.754 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.754 07:21:57 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:54.754 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.754 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:19:54.754 "tick_rate": 2700000000, 00:19:54.754 "poll_groups": [ 00:19:54.754 { 00:19:54.754 "name": "nvmf_tgt_poll_group_000", 00:19:54.754 "admin_qpairs": 1, 00:19:54.754 "io_qpairs": 3, 00:19:54.754 "current_admin_qpairs": 1, 00:19:54.754 "current_io_qpairs": 3, 00:19:54.754 "pending_bdev_io": 0, 00:19:54.754 "completed_nvme_io": 27214, 00:19:54.754 "transports": [ 00:19:54.754 { 00:19:54.754 "trtype": "TCP" 00:19:54.754 } 00:19:54.754 ] 00:19:54.754 }, 00:19:54.754 { 00:19:54.754 "name": "nvmf_tgt_poll_group_001", 00:19:54.754 "admin_qpairs": 0, 00:19:54.754 "io_qpairs": 1, 00:19:54.754 "current_admin_qpairs": 0, 00:19:54.754 "current_io_qpairs": 1, 00:19:54.754 "pending_bdev_io": 0, 00:19:54.754 "completed_nvme_io": 25143, 00:19:54.754 "transports": [ 00:19:54.754 { 00:19:54.754 "trtype": "TCP" 00:19:54.754 } 00:19:54.754 ] 00:19:54.754 }, 00:19:54.754 { 00:19:54.754 "name": "nvmf_tgt_poll_group_002", 00:19:54.754 "admin_qpairs": 0, 00:19:54.754 "io_qpairs": 0, 00:19:54.754 "current_admin_qpairs": 0, 00:19:54.754 "current_io_qpairs": 0, 00:19:54.754 "pending_bdev_io": 0, 00:19:54.754 "completed_nvme_io": 0, 00:19:54.754 "transports": [ 00:19:54.754 { 00:19:54.754 "trtype": "TCP" 00:19:54.754 } 00:19:54.754 ] 00:19:54.754 }, 00:19:54.754 { 00:19:54.754 "name": "nvmf_tgt_poll_group_003", 00:19:54.754 "admin_qpairs": 0, 00:19:54.754 "io_qpairs": 0, 00:19:54.754 "current_admin_qpairs": 0, 00:19:54.754 "current_io_qpairs": 0, 00:19:54.754 "pending_bdev_io": 0, 00:19:54.754 "completed_nvme_io": 0, 00:19:54.754 "transports": [ 00:19:54.754 { 00:19:54.754 "trtype": "TCP" 00:19:54.754 } 00:19:54.754 ] 00:19:54.754 } 00:19:54.754 ] 00:19:54.754 }' 00:19:54.754 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:19:54.754 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:19:54.754 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:19:54.754 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:19:54.754 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2543045 00:20:02.865 Initializing NVMe Controllers 00:20:02.865 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:02.865 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:02.865 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:02.865 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:02.865 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:02.865 Initialization complete. Launching workers. 
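Stripped of the xtrace noise, the target-side ADQ configuration and the check that follows it reduce to the sequence below. Every command is taken from the trace; the reading of the final check is that ADQ placement should leave at least two of the four poll groups with no I/O queue pairs (without steering, the four initiator connections would be spread across all groups), and the failure message is only illustrative.

# Enable placement-id grouping and zero-copy send on the default (posix) sock implementation.
rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
rpc_cmd framework_start_init
# TCP transport with socket priority 1 so queue pairs inherit the ADQ traffic class.
rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
# One malloc-backed namespace exported through cnode1 on 10.0.0.2:4420.
rpc_cmd bdev_malloc_create 64 512 -b Malloc1
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# While spdk_nvme_perf runs on cores 4-7 (-c 0xF0), count target poll groups that stayed idle.
idle=$(rpc_cmd nvmf_get_stats | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l)
[[ $idle -lt 2 ]] && echo 'connections were not concentrated by ADQ'   # hypothetical message; the test simply fails here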
00:20:02.865 ======================================================== 00:20:02.865 Latency(us) 00:20:02.865 Device Information : IOPS MiB/s Average min max 00:20:02.865 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13263.80 51.81 4825.03 1760.22 46488.89 00:20:02.865 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 3768.80 14.72 17048.83 2796.50 62683.91 00:20:02.865 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4828.50 18.86 13262.05 2090.67 60428.89 00:20:02.865 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5308.70 20.74 12100.43 2158.46 60787.33 00:20:02.865 ======================================================== 00:20:02.865 Total : 27169.80 106.13 9441.56 1760.22 62683.91 00:20:02.865 00:20:02.865 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:20:02.865 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:02.865 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:02.865 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:02.865 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:02.865 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:02.865 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:02.865 rmmod nvme_tcp 00:20:02.865 rmmod nvme_fabrics 00:20:02.865 rmmod nvme_keyring 00:20:02.865 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:02.865 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:02.865 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:02.865 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2542902 ']' 00:20:02.865 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2542902 00:20:02.865 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 2542902 ']' 00:20:02.865 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 2542902 00:20:02.865 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:20:02.865 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:02.865 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2542902 00:20:02.865 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:02.865 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:02.865 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2542902' 00:20:02.865 killing process with pid 2542902 00:20:02.865 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 2542902 00:20:02.865 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 2542902 00:20:02.865 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:02.865 
07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:02.865 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:02.865 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:02.865 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:02.865 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:02.865 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:02.865 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:02.866 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:02.866 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:02.866 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:02.866 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.771 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:04.771 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:20:04.771 00:20:04.771 real 0m44.016s 00:20:04.771 user 2m39.601s 00:20:04.771 sys 0m9.611s 00:20:04.771 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:04.771 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:04.771 ************************************ 00:20:04.771 END TEST nvmf_perf_adq 00:20:04.771 ************************************ 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:05.030 ************************************ 00:20:05.030 START TEST nvmf_shutdown 00:20:05.030 ************************************ 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:05.030 * Looking for test storage... 
00:20:05.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:05.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.030 --rc genhtml_branch_coverage=1 00:20:05.030 --rc genhtml_function_coverage=1 00:20:05.030 --rc genhtml_legend=1 00:20:05.030 --rc geninfo_all_blocks=1 00:20:05.030 --rc geninfo_unexecuted_blocks=1 00:20:05.030 00:20:05.030 ' 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:05.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.030 --rc genhtml_branch_coverage=1 00:20:05.030 --rc genhtml_function_coverage=1 00:20:05.030 --rc genhtml_legend=1 00:20:05.030 --rc geninfo_all_blocks=1 00:20:05.030 --rc geninfo_unexecuted_blocks=1 00:20:05.030 00:20:05.030 ' 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:05.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.030 --rc genhtml_branch_coverage=1 00:20:05.030 --rc genhtml_function_coverage=1 00:20:05.030 --rc genhtml_legend=1 00:20:05.030 --rc geninfo_all_blocks=1 00:20:05.030 --rc geninfo_unexecuted_blocks=1 00:20:05.030 00:20:05.030 ' 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:05.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.030 --rc genhtml_branch_coverage=1 00:20:05.030 --rc genhtml_function_coverage=1 00:20:05.030 --rc genhtml_legend=1 00:20:05.030 --rc geninfo_all_blocks=1 00:20:05.030 --rc geninfo_unexecuted_blocks=1 00:20:05.030 00:20:05.030 ' 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:05.030 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:05.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:05.031 07:22:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:05.031 ************************************ 00:20:05.031 START TEST nvmf_shutdown_tc1 00:20:05.031 ************************************ 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc1 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:05.031 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:07.630 07:22:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:07.630 07:22:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:07.630 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:07.630 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:07.630 Found net devices under 0000:09:00.0: cvl_0_0 00:20:07.630 07:22:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:07.630 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:07.631 Found net devices under 0000:09:00.1: cvl_0_1 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:07.631 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:07.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:20:07.631 00:20:07.631 --- 10.0.0.2 ping statistics --- 00:20:07.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.631 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:07.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:07.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:20:07.631 00:20:07.631 --- 10.0.0.1 ping statistics --- 00:20:07.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.631 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2546220 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2546220 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 2546220 ']' 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
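For the shutdown test the target is started inside the same namespace and the harness blocks until its RPC socket answers. A simplified sketch of that start/wait pattern, assuming the repository root as the working directory (the real waitforlisten helper adds retries and timeouts; the rpc_get_methods probe below is only illustrative):

# Start nvmf_tgt on cores 1-4 (-m 0x1E) with all tracepoint groups enabled (-e 0xFFFF).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# Poll the UNIX-domain RPC socket until the application is ready for configuration RPCs.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done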
00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:07.631 [2024-11-20 07:22:10.722049] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:20:07.631 [2024-11-20 07:22:10.722124] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.631 [2024-11-20 07:22:10.794126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:07.631 [2024-11-20 07:22:10.852110] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.631 [2024-11-20 07:22:10.852160] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:07.631 [2024-11-20 07:22:10.852189] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:07.631 [2024-11-20 07:22:10.852200] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:07.631 [2024-11-20 07:22:10.852209] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:07.631 [2024-11-20 07:22:10.853922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:07.631 [2024-11-20 07:22:10.854061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:07.631 [2024-11-20 07:22:10.854231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:07.631 [2024-11-20 07:22:10.854236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.631 07:22:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:07.631 [2024-11-20 07:22:11.004058] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:07.631 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.632 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:07.632 07:22:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:07.632 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:07.632 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:07.632 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:07.632 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:07.632 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:07.632 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:07.632 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:07.632 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:07.632 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:07.632 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:07.632 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:07.632 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:07.632 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:07.632 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:07.632 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:07.632 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:07.632 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:07.632 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:07.632 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:07.632 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:07.632 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:07.632 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:07.632 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:07.632 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:07.632 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.632 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:07.890 Malloc1 
00:20:07.890 [2024-11-20 07:22:11.110919] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:07.890 Malloc2 00:20:07.890 Malloc3 00:20:07.890 Malloc4 00:20:07.890 Malloc5 00:20:08.148 Malloc6 00:20:08.148 Malloc7 00:20:08.148 Malloc8 00:20:08.148 Malloc9 00:20:08.148 Malloc10 00:20:08.148 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.148 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:08.148 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:08.148 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:08.407 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2546393 00:20:08.407 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2546393 /var/tmp/bdevperf.sock 00:20:08.407 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 2546393 ']' 00:20:08.407 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:08.408 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:08.408 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:08.408 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:08.408 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:08.408 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:08.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:08.408 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:08.408 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:08.408 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.408 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:08.408 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.408 { 00:20:08.408 "params": { 00:20:08.408 "name": "Nvme$subsystem", 00:20:08.408 "trtype": "$TEST_TRANSPORT", 00:20:08.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.408 "adrfam": "ipv4", 00:20:08.408 "trsvcid": "$NVMF_PORT", 00:20:08.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.408 "hdgst": ${hdgst:-false}, 00:20:08.408 "ddgst": ${ddgst:-false} 00:20:08.408 }, 00:20:08.408 "method": "bdev_nvme_attach_controller" 00:20:08.408 } 00:20:08.408 EOF 00:20:08.408 )") 00:20:08.408 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:08.408 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.408 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.408 { 00:20:08.408 "params": { 00:20:08.408 "name": "Nvme$subsystem", 00:20:08.408 "trtype": "$TEST_TRANSPORT", 00:20:08.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.408 "adrfam": "ipv4", 00:20:08.408 "trsvcid": "$NVMF_PORT", 00:20:08.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.408 "hdgst": ${hdgst:-false}, 00:20:08.408 "ddgst": ${ddgst:-false} 00:20:08.408 }, 00:20:08.408 "method": "bdev_nvme_attach_controller" 00:20:08.408 } 00:20:08.408 EOF 00:20:08.408 )") 00:20:08.408 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:08.408 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.408 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.408 { 00:20:08.408 "params": { 00:20:08.408 "name": "Nvme$subsystem", 00:20:08.408 "trtype": "$TEST_TRANSPORT", 00:20:08.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.408 "adrfam": "ipv4", 00:20:08.408 "trsvcid": "$NVMF_PORT", 00:20:08.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.408 "hdgst": ${hdgst:-false}, 00:20:08.408 "ddgst": ${ddgst:-false} 00:20:08.408 }, 00:20:08.408 "method": "bdev_nvme_attach_controller" 00:20:08.408 } 00:20:08.408 EOF 00:20:08.408 )") 00:20:08.408 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:08.408 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.408 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.408 { 00:20:08.408 "params": { 00:20:08.408 "name": "Nvme$subsystem", 00:20:08.408 
"trtype": "$TEST_TRANSPORT", 00:20:08.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.408 "adrfam": "ipv4", 00:20:08.408 "trsvcid": "$NVMF_PORT", 00:20:08.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.408 "hdgst": ${hdgst:-false}, 00:20:08.408 "ddgst": ${ddgst:-false} 00:20:08.408 }, 00:20:08.408 "method": "bdev_nvme_attach_controller" 00:20:08.408 } 00:20:08.408 EOF 00:20:08.408 )") 00:20:08.408 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:08.408 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.408 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.408 { 00:20:08.408 "params": { 00:20:08.408 "name": "Nvme$subsystem", 00:20:08.408 "trtype": "$TEST_TRANSPORT", 00:20:08.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.408 "adrfam": "ipv4", 00:20:08.408 "trsvcid": "$NVMF_PORT", 00:20:08.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.408 "hdgst": ${hdgst:-false}, 00:20:08.408 "ddgst": ${ddgst:-false} 00:20:08.408 }, 00:20:08.408 "method": "bdev_nvme_attach_controller" 00:20:08.408 } 00:20:08.408 EOF 00:20:08.408 )") 00:20:08.408 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:08.408 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.408 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.408 { 00:20:08.408 "params": { 00:20:08.408 "name": "Nvme$subsystem", 00:20:08.408 "trtype": "$TEST_TRANSPORT", 00:20:08.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.408 "adrfam": "ipv4", 00:20:08.408 "trsvcid": "$NVMF_PORT", 00:20:08.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.408 "hdgst": ${hdgst:-false}, 00:20:08.408 "ddgst": ${ddgst:-false} 00:20:08.408 }, 00:20:08.408 "method": "bdev_nvme_attach_controller" 00:20:08.408 } 00:20:08.408 EOF 00:20:08.408 )") 00:20:08.408 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:08.408 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.408 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.408 { 00:20:08.408 "params": { 00:20:08.408 "name": "Nvme$subsystem", 00:20:08.408 "trtype": "$TEST_TRANSPORT", 00:20:08.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.408 "adrfam": "ipv4", 00:20:08.408 "trsvcid": "$NVMF_PORT", 00:20:08.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.408 "hdgst": ${hdgst:-false}, 00:20:08.408 "ddgst": ${ddgst:-false} 00:20:08.408 }, 00:20:08.408 "method": "bdev_nvme_attach_controller" 00:20:08.408 } 00:20:08.408 EOF 00:20:08.408 )") 00:20:08.408 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:08.408 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.408 07:22:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.408 { 00:20:08.408 "params": { 00:20:08.408 "name": "Nvme$subsystem", 00:20:08.408 "trtype": "$TEST_TRANSPORT", 00:20:08.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.408 "adrfam": "ipv4", 00:20:08.408 "trsvcid": "$NVMF_PORT", 00:20:08.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.408 "hdgst": ${hdgst:-false}, 00:20:08.408 "ddgst": ${ddgst:-false} 00:20:08.408 }, 00:20:08.408 "method": "bdev_nvme_attach_controller" 00:20:08.408 } 00:20:08.408 EOF 00:20:08.408 )") 00:20:08.408 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:08.408 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.408 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.408 { 00:20:08.408 "params": { 00:20:08.408 "name": "Nvme$subsystem", 00:20:08.408 "trtype": "$TEST_TRANSPORT", 00:20:08.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.408 "adrfam": "ipv4", 00:20:08.408 "trsvcid": "$NVMF_PORT", 00:20:08.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.408 "hdgst": ${hdgst:-false}, 00:20:08.408 "ddgst": ${ddgst:-false} 00:20:08.408 }, 00:20:08.408 "method": "bdev_nvme_attach_controller" 00:20:08.408 } 00:20:08.408 EOF 00:20:08.408 )") 00:20:08.408 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:08.408 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.408 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.408 { 00:20:08.408 "params": { 00:20:08.408 "name": "Nvme$subsystem", 00:20:08.408 "trtype": "$TEST_TRANSPORT", 00:20:08.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.408 "adrfam": "ipv4", 00:20:08.409 "trsvcid": "$NVMF_PORT", 00:20:08.409 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.409 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.409 "hdgst": ${hdgst:-false}, 00:20:08.409 "ddgst": ${ddgst:-false} 00:20:08.409 }, 00:20:08.409 "method": "bdev_nvme_attach_controller" 00:20:08.409 } 00:20:08.409 EOF 00:20:08.409 )") 00:20:08.409 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:08.409 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:20:08.409 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:08.409 07:22:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:08.409 "params": { 00:20:08.409 "name": "Nvme1", 00:20:08.409 "trtype": "tcp", 00:20:08.409 "traddr": "10.0.0.2", 00:20:08.409 "adrfam": "ipv4", 00:20:08.409 "trsvcid": "4420", 00:20:08.409 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.409 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:08.409 "hdgst": false, 00:20:08.409 "ddgst": false 00:20:08.409 }, 00:20:08.409 "method": "bdev_nvme_attach_controller" 00:20:08.409 },{ 00:20:08.409 "params": { 00:20:08.409 "name": "Nvme2", 00:20:08.409 "trtype": "tcp", 00:20:08.409 "traddr": "10.0.0.2", 00:20:08.409 "adrfam": "ipv4", 00:20:08.409 "trsvcid": "4420", 00:20:08.409 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:08.409 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:08.409 "hdgst": false, 00:20:08.409 "ddgst": false 00:20:08.409 }, 00:20:08.409 "method": "bdev_nvme_attach_controller" 00:20:08.409 },{ 00:20:08.409 "params": { 00:20:08.409 "name": "Nvme3", 00:20:08.409 "trtype": "tcp", 00:20:08.409 "traddr": "10.0.0.2", 00:20:08.409 "adrfam": "ipv4", 00:20:08.409 "trsvcid": "4420", 00:20:08.409 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:08.409 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:08.409 "hdgst": false, 00:20:08.409 "ddgst": false 00:20:08.409 }, 00:20:08.409 "method": "bdev_nvme_attach_controller" 00:20:08.409 },{ 00:20:08.409 "params": { 00:20:08.409 "name": "Nvme4", 00:20:08.409 "trtype": "tcp", 00:20:08.409 "traddr": "10.0.0.2", 00:20:08.409 "adrfam": "ipv4", 00:20:08.409 "trsvcid": "4420", 00:20:08.409 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:08.409 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:08.409 "hdgst": false, 00:20:08.409 "ddgst": false 00:20:08.409 }, 00:20:08.409 "method": "bdev_nvme_attach_controller" 00:20:08.409 },{ 00:20:08.409 "params": { 00:20:08.409 "name": "Nvme5", 00:20:08.409 "trtype": "tcp", 00:20:08.409 "traddr": "10.0.0.2", 00:20:08.409 "adrfam": "ipv4", 00:20:08.409 "trsvcid": "4420", 00:20:08.409 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:08.409 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:08.409 "hdgst": false, 00:20:08.409 "ddgst": false 00:20:08.409 }, 00:20:08.409 "method": "bdev_nvme_attach_controller" 00:20:08.409 },{ 00:20:08.409 "params": { 00:20:08.409 "name": "Nvme6", 00:20:08.409 "trtype": "tcp", 00:20:08.409 "traddr": "10.0.0.2", 00:20:08.409 "adrfam": "ipv4", 00:20:08.409 "trsvcid": "4420", 00:20:08.409 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:08.409 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:08.409 "hdgst": false, 00:20:08.409 "ddgst": false 00:20:08.409 }, 00:20:08.409 "method": "bdev_nvme_attach_controller" 00:20:08.409 },{ 00:20:08.409 "params": { 00:20:08.409 "name": "Nvme7", 00:20:08.409 "trtype": "tcp", 00:20:08.409 "traddr": "10.0.0.2", 00:20:08.409 "adrfam": "ipv4", 00:20:08.409 "trsvcid": "4420", 00:20:08.409 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:08.409 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:08.409 "hdgst": false, 00:20:08.409 "ddgst": false 00:20:08.409 }, 00:20:08.409 "method": "bdev_nvme_attach_controller" 00:20:08.409 },{ 00:20:08.409 "params": { 00:20:08.409 "name": "Nvme8", 00:20:08.409 "trtype": "tcp", 00:20:08.409 "traddr": "10.0.0.2", 00:20:08.409 "adrfam": "ipv4", 00:20:08.409 "trsvcid": "4420", 00:20:08.409 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:08.409 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:08.409 "hdgst": false, 00:20:08.409 "ddgst": false 00:20:08.409 }, 00:20:08.409 "method": "bdev_nvme_attach_controller" 00:20:08.409 },{ 00:20:08.409 "params": { 00:20:08.409 "name": "Nvme9", 00:20:08.409 "trtype": "tcp", 00:20:08.409 "traddr": "10.0.0.2", 00:20:08.409 "adrfam": "ipv4", 00:20:08.409 "trsvcid": "4420", 00:20:08.409 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:08.409 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:08.409 "hdgst": false, 00:20:08.409 "ddgst": false 00:20:08.409 }, 00:20:08.409 "method": "bdev_nvme_attach_controller" 00:20:08.409 },{ 00:20:08.409 "params": { 00:20:08.409 "name": "Nvme10", 00:20:08.409 "trtype": "tcp", 00:20:08.409 "traddr": "10.0.0.2", 00:20:08.409 "adrfam": "ipv4", 00:20:08.409 "trsvcid": "4420", 00:20:08.409 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:08.409 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:08.409 "hdgst": false, 00:20:08.409 "ddgst": false 00:20:08.409 }, 00:20:08.409 "method": "bdev_nvme_attach_controller" 00:20:08.409 }' 00:20:08.409 [2024-11-20 07:22:11.642844] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:20:08.409 [2024-11-20 07:22:11.642924] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:08.409 [2024-11-20 07:22:11.715980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.409 [2024-11-20 07:22:11.776768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.308 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:10.308 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:20:10.308 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:10.308 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.308 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:10.308 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.308 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2546393 00:20:10.308 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:20:10.308 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:20:11.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2546393 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:11.682 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2546220 00:20:11.682 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:11.682 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 
1 2 3 4 5 6 7 8 9 10 00:20:11.682 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:11.682 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:11.682 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:11.682 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:11.682 { 00:20:11.682 "params": { 00:20:11.682 "name": "Nvme$subsystem", 00:20:11.682 "trtype": "$TEST_TRANSPORT", 00:20:11.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.682 "adrfam": "ipv4", 00:20:11.682 "trsvcid": "$NVMF_PORT", 00:20:11.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.682 "hdgst": ${hdgst:-false}, 00:20:11.682 "ddgst": ${ddgst:-false} 00:20:11.683 }, 00:20:11.683 "method": "bdev_nvme_attach_controller" 00:20:11.683 } 00:20:11.683 EOF 00:20:11.683 )") 00:20:11.683 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:11.683 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:11.683 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:11.683 { 00:20:11.683 "params": { 00:20:11.683 "name": "Nvme$subsystem", 00:20:11.683 "trtype": "$TEST_TRANSPORT", 00:20:11.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.683 "adrfam": "ipv4", 00:20:11.683 "trsvcid": "$NVMF_PORT", 00:20:11.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.683 "hdgst": ${hdgst:-false}, 00:20:11.683 "ddgst": ${ddgst:-false} 00:20:11.683 }, 00:20:11.683 "method": "bdev_nvme_attach_controller" 00:20:11.683 } 00:20:11.683 EOF 00:20:11.683 )") 00:20:11.683 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:11.683 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:11.683 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:11.683 { 00:20:11.683 "params": { 00:20:11.683 "name": "Nvme$subsystem", 00:20:11.683 "trtype": "$TEST_TRANSPORT", 00:20:11.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.683 "adrfam": "ipv4", 00:20:11.683 "trsvcid": "$NVMF_PORT", 00:20:11.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.683 "hdgst": ${hdgst:-false}, 00:20:11.683 "ddgst": ${ddgst:-false} 00:20:11.683 }, 00:20:11.683 "method": "bdev_nvme_attach_controller" 00:20:11.683 } 00:20:11.683 EOF 00:20:11.683 )") 00:20:11.683 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:11.683 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:11.683 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:11.683 { 00:20:11.683 "params": { 00:20:11.683 "name": "Nvme$subsystem", 00:20:11.683 "trtype": "$TEST_TRANSPORT", 00:20:11.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.683 "adrfam": "ipv4", 00:20:11.683 
"trsvcid": "$NVMF_PORT", 00:20:11.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.683 "hdgst": ${hdgst:-false}, 00:20:11.683 "ddgst": ${ddgst:-false} 00:20:11.683 }, 00:20:11.683 "method": "bdev_nvme_attach_controller" 00:20:11.683 } 00:20:11.683 EOF 00:20:11.683 )") 00:20:11.683 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:11.683 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:11.683 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:11.683 { 00:20:11.683 "params": { 00:20:11.683 "name": "Nvme$subsystem", 00:20:11.683 "trtype": "$TEST_TRANSPORT", 00:20:11.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.683 "adrfam": "ipv4", 00:20:11.683 "trsvcid": "$NVMF_PORT", 00:20:11.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.683 "hdgst": ${hdgst:-false}, 00:20:11.683 "ddgst": ${ddgst:-false} 00:20:11.683 }, 00:20:11.683 "method": "bdev_nvme_attach_controller" 00:20:11.683 } 00:20:11.683 EOF 00:20:11.683 )") 00:20:11.683 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:11.683 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:11.683 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:11.683 { 00:20:11.683 "params": { 00:20:11.683 "name": "Nvme$subsystem", 00:20:11.683 "trtype": "$TEST_TRANSPORT", 00:20:11.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.683 "adrfam": "ipv4", 00:20:11.683 "trsvcid": "$NVMF_PORT", 00:20:11.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.683 "hdgst": ${hdgst:-false}, 00:20:11.683 "ddgst": ${ddgst:-false} 00:20:11.683 }, 00:20:11.683 "method": "bdev_nvme_attach_controller" 00:20:11.683 } 00:20:11.683 EOF 00:20:11.683 )") 00:20:11.683 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:11.683 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:11.683 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:11.683 { 00:20:11.683 "params": { 00:20:11.683 "name": "Nvme$subsystem", 00:20:11.683 "trtype": "$TEST_TRANSPORT", 00:20:11.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.683 "adrfam": "ipv4", 00:20:11.683 "trsvcid": "$NVMF_PORT", 00:20:11.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.683 "hdgst": ${hdgst:-false}, 00:20:11.683 "ddgst": ${ddgst:-false} 00:20:11.683 }, 00:20:11.683 "method": "bdev_nvme_attach_controller" 00:20:11.683 } 00:20:11.683 EOF 00:20:11.683 )") 00:20:11.683 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:11.683 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:11.683 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:11.683 { 00:20:11.683 
"params": { 00:20:11.683 "name": "Nvme$subsystem", 00:20:11.683 "trtype": "$TEST_TRANSPORT", 00:20:11.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.683 "adrfam": "ipv4", 00:20:11.683 "trsvcid": "$NVMF_PORT", 00:20:11.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.683 "hdgst": ${hdgst:-false}, 00:20:11.683 "ddgst": ${ddgst:-false} 00:20:11.683 }, 00:20:11.683 "method": "bdev_nvme_attach_controller" 00:20:11.683 } 00:20:11.683 EOF 00:20:11.683 )") 00:20:11.683 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:11.683 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:11.683 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:11.683 { 00:20:11.683 "params": { 00:20:11.683 "name": "Nvme$subsystem", 00:20:11.683 "trtype": "$TEST_TRANSPORT", 00:20:11.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.683 "adrfam": "ipv4", 00:20:11.683 "trsvcid": "$NVMF_PORT", 00:20:11.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.683 "hdgst": ${hdgst:-false}, 00:20:11.683 "ddgst": ${ddgst:-false} 00:20:11.683 }, 00:20:11.683 "method": "bdev_nvme_attach_controller" 00:20:11.683 } 00:20:11.683 EOF 00:20:11.683 )") 00:20:11.683 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:11.683 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:11.683 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:11.683 { 00:20:11.683 "params": { 00:20:11.683 "name": "Nvme$subsystem", 00:20:11.683 "trtype": "$TEST_TRANSPORT", 00:20:11.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.683 "adrfam": "ipv4", 00:20:11.683 "trsvcid": "$NVMF_PORT", 00:20:11.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.683 "hdgst": ${hdgst:-false}, 00:20:11.683 "ddgst": ${ddgst:-false} 00:20:11.683 }, 00:20:11.683 "method": "bdev_nvme_attach_controller" 00:20:11.683 } 00:20:11.683 EOF 00:20:11.683 )") 00:20:11.683 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:11.683 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:20:11.683 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:11.683 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:11.683 "params": { 00:20:11.683 "name": "Nvme1", 00:20:11.683 "trtype": "tcp", 00:20:11.683 "traddr": "10.0.0.2", 00:20:11.683 "adrfam": "ipv4", 00:20:11.683 "trsvcid": "4420", 00:20:11.683 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.683 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:11.683 "hdgst": false, 00:20:11.683 "ddgst": false 00:20:11.683 }, 00:20:11.683 "method": "bdev_nvme_attach_controller" 00:20:11.683 },{ 00:20:11.683 "params": { 00:20:11.683 "name": "Nvme2", 00:20:11.683 "trtype": "tcp", 00:20:11.683 "traddr": "10.0.0.2", 00:20:11.683 "adrfam": "ipv4", 00:20:11.683 "trsvcid": "4420", 00:20:11.683 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:11.683 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:11.683 "hdgst": false, 00:20:11.684 "ddgst": false 00:20:11.684 }, 00:20:11.684 "method": "bdev_nvme_attach_controller" 00:20:11.684 },{ 00:20:11.684 "params": { 00:20:11.684 "name": "Nvme3", 00:20:11.684 "trtype": "tcp", 00:20:11.684 "traddr": "10.0.0.2", 00:20:11.684 "adrfam": "ipv4", 00:20:11.684 "trsvcid": "4420", 00:20:11.684 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:11.684 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:11.684 "hdgst": false, 00:20:11.684 "ddgst": false 00:20:11.684 }, 00:20:11.684 "method": "bdev_nvme_attach_controller" 00:20:11.684 },{ 00:20:11.684 "params": { 00:20:11.684 "name": "Nvme4", 00:20:11.684 "trtype": "tcp", 00:20:11.684 "traddr": "10.0.0.2", 00:20:11.684 "adrfam": "ipv4", 00:20:11.684 "trsvcid": "4420", 00:20:11.684 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:11.684 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:11.684 "hdgst": false, 00:20:11.684 "ddgst": false 00:20:11.684 }, 00:20:11.684 "method": "bdev_nvme_attach_controller" 00:20:11.684 },{ 00:20:11.684 "params": { 00:20:11.684 "name": "Nvme5", 00:20:11.684 "trtype": "tcp", 00:20:11.684 "traddr": "10.0.0.2", 00:20:11.684 "adrfam": "ipv4", 00:20:11.684 "trsvcid": "4420", 00:20:11.684 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:11.684 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:11.684 "hdgst": false, 00:20:11.684 "ddgst": false 00:20:11.684 }, 00:20:11.684 "method": "bdev_nvme_attach_controller" 00:20:11.684 },{ 00:20:11.684 "params": { 00:20:11.684 "name": "Nvme6", 00:20:11.684 "trtype": "tcp", 00:20:11.684 "traddr": "10.0.0.2", 00:20:11.684 "adrfam": "ipv4", 00:20:11.684 "trsvcid": "4420", 00:20:11.684 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:11.684 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:11.684 "hdgst": false, 00:20:11.684 "ddgst": false 00:20:11.684 }, 00:20:11.684 "method": "bdev_nvme_attach_controller" 00:20:11.684 },{ 00:20:11.684 "params": { 00:20:11.684 "name": "Nvme7", 00:20:11.684 "trtype": "tcp", 00:20:11.684 "traddr": "10.0.0.2", 00:20:11.684 "adrfam": "ipv4", 00:20:11.684 "trsvcid": "4420", 00:20:11.684 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:11.684 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:11.684 "hdgst": false, 00:20:11.684 "ddgst": false 00:20:11.684 }, 00:20:11.684 "method": "bdev_nvme_attach_controller" 00:20:11.684 },{ 00:20:11.684 "params": { 00:20:11.684 "name": "Nvme8", 00:20:11.684 "trtype": "tcp", 00:20:11.684 "traddr": "10.0.0.2", 00:20:11.684 "adrfam": "ipv4", 00:20:11.684 "trsvcid": "4420", 00:20:11.684 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:11.684 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:11.684 "hdgst": false, 00:20:11.684 "ddgst": false 00:20:11.684 }, 00:20:11.684 "method": "bdev_nvme_attach_controller" 00:20:11.684 },{ 00:20:11.684 "params": { 00:20:11.684 "name": "Nvme9", 00:20:11.684 "trtype": "tcp", 00:20:11.684 "traddr": "10.0.0.2", 00:20:11.684 "adrfam": "ipv4", 00:20:11.684 "trsvcid": "4420", 00:20:11.684 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:11.684 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:11.684 "hdgst": false, 00:20:11.684 "ddgst": false 00:20:11.684 }, 00:20:11.684 "method": "bdev_nvme_attach_controller" 00:20:11.684 },{ 00:20:11.684 "params": { 00:20:11.684 "name": "Nvme10", 00:20:11.684 "trtype": "tcp", 00:20:11.684 "traddr": "10.0.0.2", 00:20:11.684 "adrfam": "ipv4", 00:20:11.684 "trsvcid": "4420", 00:20:11.684 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:11.684 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:11.684 "hdgst": false, 00:20:11.684 "ddgst": false 00:20:11.684 }, 00:20:11.684 "method": "bdev_nvme_attach_controller" 00:20:11.684 }' 00:20:11.684 [2024-11-20 07:22:14.761435] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:20:11.684 [2024-11-20 07:22:14.761525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2546755 ] 00:20:11.684 [2024-11-20 07:22:14.835983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.684 [2024-11-20 07:22:14.898839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.057 Running I/O for 1 seconds... 00:20:14.248 1739.00 IOPS, 108.69 MiB/s 00:20:14.248 Latency(us) 00:20:14.248 [2024-11-20T06:22:17.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.248 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.248 Verification LBA range: start 0x0 length 0x400 00:20:14.248 Nvme1n1 : 1.15 222.83 13.93 0.00 0.00 284491.47 18350.08 260978.92 00:20:14.248 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.248 Verification LBA range: start 0x0 length 0x400 00:20:14.249 Nvme2n1 : 1.14 223.84 13.99 0.00 0.00 278077.25 19612.25 242337.56 00:20:14.249 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.249 Verification LBA range: start 0x0 length 0x400 00:20:14.249 Nvme3n1 : 1.14 225.35 14.08 0.00 0.00 271939.13 18835.53 262532.36 00:20:14.249 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.249 Verification LBA range: start 0x0 length 0x400 00:20:14.249 Nvme4n1 : 1.16 224.32 14.02 0.00 0.00 268429.85 4029.25 260978.92 00:20:14.249 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.249 Verification LBA range: start 0x0 length 0x400 00:20:14.249 Nvme5n1 : 1.17 219.48 13.72 0.00 0.00 270388.53 20097.71 262532.36 00:20:14.249 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.249 Verification LBA range: start 0x0 length 0x400 00:20:14.249 Nvme6n1 : 1.17 218.28 13.64 0.00 0.00 267444.34 21456.97 279620.27 00:20:14.249 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.249 Verification LBA range: start 0x0 length 0x400 00:20:14.249 Nvme7n1 : 1.15 221.92 13.87 0.00 0.00 258107.16 46020.84 242337.56 00:20:14.249 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.249 
Verification LBA range: start 0x0 length 0x400 00:20:14.249 Nvme8n1 : 1.18 270.74 16.92 0.00 0.00 208153.87 16311.18 273406.48 00:20:14.249 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.249 Verification LBA range: start 0x0 length 0x400 00:20:14.249 Nvme9n1 : 1.18 217.08 13.57 0.00 0.00 255515.12 20097.71 274959.93 00:20:14.249 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.249 Verification LBA range: start 0x0 length 0x400 00:20:14.249 Nvme10n1 : 1.18 222.31 13.89 0.00 0.00 244037.38 8592.50 284280.60 00:20:14.249 [2024-11-20T06:22:17.682Z] =================================================================================================================== 00:20:14.249 [2024-11-20T06:22:17.682Z] Total : 2266.15 141.63 0.00 0.00 259350.80 4029.25 284280.60 00:20:14.507 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:20:14.507 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:14.507 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:14.507 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:14.507 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:14.507 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:14.507 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:20:14.507 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:14.507 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:20:14.507 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:14.507 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:14.507 rmmod nvme_tcp 00:20:14.507 rmmod nvme_fabrics 00:20:14.507 rmmod nvme_keyring 00:20:14.507 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:14.507 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:20:14.507 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:20:14.507 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2546220 ']' 00:20:14.507 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2546220 00:20:14.507 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' -z 2546220 ']' 00:20:14.507 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # kill -0 2546220 00:20:14.507 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # uname 00:20:14.507 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:14.507 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2546220 00:20:14.507 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:14.507 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:14.507 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2546220' 00:20:14.507 killing process with pid 2546220 00:20:14.507 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # kill 2546220 00:20:14.507 07:22:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@976 -- # wait 2546220 00:20:15.073 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:15.073 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:15.073 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:15.073 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:20:15.073 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:20:15.073 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:15.073 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:20:15.073 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:15.073 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:15.073 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.073 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:15.074 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.980 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:16.980 00:20:16.980 real 0m11.936s 00:20:16.980 user 0m34.698s 00:20:16.980 sys 0m3.251s 00:20:16.980 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:16.980 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:16.980 ************************************ 00:20:16.980 END TEST nvmf_shutdown_tc1 00:20:16.980 ************************************ 00:20:16.980 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:16.980 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:16.980 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 
00:20:16.980 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:17.239 ************************************ 00:20:17.239 START TEST nvmf_shutdown_tc2 00:20:17.239 ************************************ 00:20:17.239 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc2 00:20:17.239 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:20:17.239 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:17.239 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:17.239 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:17.239 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:17.239 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:17.239 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:17.239 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.239 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:17.239 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:17.239 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:17.239 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:17.239 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:17.239 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:17.239 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:17.239 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:17.239 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:17.239 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:17.239 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:17.239 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:17.239 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:17.239 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:20:17.240 07:22:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:17.240 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:17.240 07:22:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:17.240 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:17.240 Found net devices under 0000:09:00.0: cvl_0_0 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:17.240 Found net devices under 0000:09:00.1: cvl_0_1 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:17.240 07:22:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:17.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:17.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:20:17.240 00:20:17.240 --- 10.0.0.2 ping statistics --- 00:20:17.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.240 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:17.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:17.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:20:17.240 00:20:17.240 --- 10.0.0.1 ping statistics --- 00:20:17.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.240 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:20:17.240 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:17.241 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:17.241 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:17.241 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:17.241 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:17.241 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:17.241 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:17.241 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:17.241 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:17.241 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:17.241 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:17.241 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2547585 00:20:17.241 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:17.241 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2547585 00:20:17.241 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 2547585 ']' 00:20:17.241 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.241 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:17.241 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:17.241 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:17.241 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:17.241 [2024-11-20 07:22:20.651945] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:20:17.241 [2024-11-20 07:22:20.652029] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:17.499 [2024-11-20 07:22:20.735867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:17.499 [2024-11-20 07:22:20.805664] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:17.499 [2024-11-20 07:22:20.805733] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:17.499 [2024-11-20 07:22:20.805753] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:17.499 [2024-11-20 07:22:20.805769] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:17.499 [2024-11-20 07:22:20.805797] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:17.499 [2024-11-20 07:22:20.807718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:17.499 [2024-11-20 07:22:20.807780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:17.499 [2024-11-20 07:22:20.807854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:17.499 [2024-11-20 07:22:20.807846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:17.758 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:17.758 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:20:17.758 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:17.758 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:17.758 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:17.758 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:17.758 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:17.758 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.758 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:17.758 [2024-11-20 07:22:20.968095] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:17.758 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.758 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:17.758 07:22:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:17.758 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:17.758 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:17.758 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:17.758 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:17.758 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:17.758 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:17.758 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:17.758 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:17.758 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:17.758 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:17.758 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:17.758 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:17.758 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:17.758 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:17.758 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:17.758 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:17.758 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:17.758 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:17.758 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:17.758 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:17.758 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:17.758 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:17.758 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:17.758 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:17.758 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.758 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:17.758 Malloc1 
00:20:17.758 [2024-11-20 07:22:21.072735] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:17.758 Malloc2 00:20:17.758 Malloc3 00:20:18.017 Malloc4 00:20:18.017 Malloc5 00:20:18.017 Malloc6 00:20:18.017 Malloc7 00:20:18.017 Malloc8 00:20:18.276 Malloc9 00:20:18.276 Malloc10 00:20:18.276 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.276 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:18.276 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:18.276 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:18.276 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2547681 00:20:18.276 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2547681 /var/tmp/bdevperf.sock 00:20:18.276 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 2547681 ']' 00:20:18.276 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:18.276 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:18.276 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:18.276 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:18.276 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:18.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:18.276 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:20:18.276 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:18.276 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:20:18.276 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:18.276 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:18.276 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:18.276 { 00:20:18.276 "params": { 00:20:18.276 "name": "Nvme$subsystem", 00:20:18.276 "trtype": "$TEST_TRANSPORT", 00:20:18.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.276 "adrfam": "ipv4", 00:20:18.276 "trsvcid": "$NVMF_PORT", 00:20:18.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.277 "hdgst": ${hdgst:-false}, 00:20:18.277 "ddgst": ${ddgst:-false} 00:20:18.277 }, 00:20:18.277 "method": "bdev_nvme_attach_controller" 00:20:18.277 } 00:20:18.277 EOF 00:20:18.277 )") 00:20:18.277 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:18.277 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:18.277 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:18.277 { 00:20:18.277 "params": { 00:20:18.277 "name": "Nvme$subsystem", 00:20:18.277 "trtype": "$TEST_TRANSPORT", 00:20:18.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.277 "adrfam": "ipv4", 00:20:18.277 "trsvcid": "$NVMF_PORT", 00:20:18.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.277 "hdgst": ${hdgst:-false}, 00:20:18.277 "ddgst": ${ddgst:-false} 00:20:18.277 }, 00:20:18.277 "method": "bdev_nvme_attach_controller" 00:20:18.277 } 00:20:18.277 EOF 00:20:18.277 )") 00:20:18.277 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:18.277 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:18.277 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:18.277 { 00:20:18.277 "params": { 00:20:18.277 "name": "Nvme$subsystem", 00:20:18.277 "trtype": "$TEST_TRANSPORT", 00:20:18.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.277 "adrfam": "ipv4", 00:20:18.277 "trsvcid": "$NVMF_PORT", 00:20:18.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.277 "hdgst": ${hdgst:-false}, 00:20:18.277 "ddgst": ${ddgst:-false} 00:20:18.277 }, 00:20:18.277 "method": "bdev_nvme_attach_controller" 00:20:18.277 } 00:20:18.277 EOF 00:20:18.277 )") 00:20:18.277 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:18.277 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:18.277 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:20:18.277 { 00:20:18.277 "params": { 00:20:18.277 "name": "Nvme$subsystem", 00:20:18.277 "trtype": "$TEST_TRANSPORT", 00:20:18.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.277 "adrfam": "ipv4", 00:20:18.277 "trsvcid": "$NVMF_PORT", 00:20:18.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.277 "hdgst": ${hdgst:-false}, 00:20:18.277 "ddgst": ${ddgst:-false} 00:20:18.277 }, 00:20:18.277 "method": "bdev_nvme_attach_controller" 00:20:18.277 } 00:20:18.277 EOF 00:20:18.277 )") 00:20:18.277 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:18.277 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:18.277 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:18.277 { 00:20:18.277 "params": { 00:20:18.277 "name": "Nvme$subsystem", 00:20:18.277 "trtype": "$TEST_TRANSPORT", 00:20:18.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.277 "adrfam": "ipv4", 00:20:18.277 "trsvcid": "$NVMF_PORT", 00:20:18.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.277 "hdgst": ${hdgst:-false}, 00:20:18.277 "ddgst": ${ddgst:-false} 00:20:18.277 }, 00:20:18.277 "method": "bdev_nvme_attach_controller" 00:20:18.277 } 00:20:18.277 EOF 00:20:18.277 )") 00:20:18.277 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:18.277 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:18.277 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:18.277 { 00:20:18.277 "params": { 00:20:18.277 "name": "Nvme$subsystem", 00:20:18.277 "trtype": "$TEST_TRANSPORT", 00:20:18.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.277 "adrfam": "ipv4", 00:20:18.277 "trsvcid": "$NVMF_PORT", 00:20:18.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.277 "hdgst": ${hdgst:-false}, 00:20:18.277 "ddgst": ${ddgst:-false} 00:20:18.277 }, 00:20:18.277 "method": "bdev_nvme_attach_controller" 00:20:18.277 } 00:20:18.277 EOF 00:20:18.277 )") 00:20:18.277 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:18.277 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:18.277 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:18.277 { 00:20:18.277 "params": { 00:20:18.277 "name": "Nvme$subsystem", 00:20:18.277 "trtype": "$TEST_TRANSPORT", 00:20:18.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.277 "adrfam": "ipv4", 00:20:18.277 "trsvcid": "$NVMF_PORT", 00:20:18.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.277 "hdgst": ${hdgst:-false}, 00:20:18.277 "ddgst": ${ddgst:-false} 00:20:18.277 }, 00:20:18.277 "method": "bdev_nvme_attach_controller" 00:20:18.277 } 00:20:18.277 EOF 00:20:18.277 )") 00:20:18.277 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:18.277 07:22:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:18.277 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:18.277 { 00:20:18.277 "params": { 00:20:18.277 "name": "Nvme$subsystem", 00:20:18.277 "trtype": "$TEST_TRANSPORT", 00:20:18.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.277 "adrfam": "ipv4", 00:20:18.277 "trsvcid": "$NVMF_PORT", 00:20:18.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.277 "hdgst": ${hdgst:-false}, 00:20:18.277 "ddgst": ${ddgst:-false} 00:20:18.277 }, 00:20:18.277 "method": "bdev_nvme_attach_controller" 00:20:18.277 } 00:20:18.277 EOF 00:20:18.277 )") 00:20:18.277 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:18.277 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:18.277 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:18.277 { 00:20:18.277 "params": { 00:20:18.277 "name": "Nvme$subsystem", 00:20:18.277 "trtype": "$TEST_TRANSPORT", 00:20:18.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.277 "adrfam": "ipv4", 00:20:18.277 "trsvcid": "$NVMF_PORT", 00:20:18.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.277 "hdgst": ${hdgst:-false}, 00:20:18.277 "ddgst": ${ddgst:-false} 00:20:18.277 }, 00:20:18.277 "method": "bdev_nvme_attach_controller" 00:20:18.277 } 00:20:18.277 EOF 00:20:18.277 )") 00:20:18.277 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:18.277 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:18.277 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:18.277 { 00:20:18.277 "params": { 00:20:18.277 "name": "Nvme$subsystem", 00:20:18.277 "trtype": "$TEST_TRANSPORT", 00:20:18.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.277 "adrfam": "ipv4", 00:20:18.277 "trsvcid": "$NVMF_PORT", 00:20:18.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.278 "hdgst": ${hdgst:-false}, 00:20:18.278 "ddgst": ${ddgst:-false} 00:20:18.278 }, 00:20:18.278 "method": "bdev_nvme_attach_controller" 00:20:18.278 } 00:20:18.278 EOF 00:20:18.278 )") 00:20:18.278 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:18.278 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:20:18.278 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:20:18.278 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:18.278 "params": { 00:20:18.278 "name": "Nvme1", 00:20:18.278 "trtype": "tcp", 00:20:18.278 "traddr": "10.0.0.2", 00:20:18.278 "adrfam": "ipv4", 00:20:18.278 "trsvcid": "4420", 00:20:18.278 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.278 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:18.278 "hdgst": false, 00:20:18.278 "ddgst": false 00:20:18.278 }, 00:20:18.278 "method": "bdev_nvme_attach_controller" 00:20:18.278 },{ 00:20:18.278 "params": { 00:20:18.278 "name": "Nvme2", 00:20:18.278 "trtype": "tcp", 00:20:18.278 "traddr": "10.0.0.2", 00:20:18.278 "adrfam": "ipv4", 00:20:18.278 "trsvcid": "4420", 00:20:18.278 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:18.278 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:18.278 "hdgst": false, 00:20:18.278 "ddgst": false 00:20:18.278 }, 00:20:18.278 "method": "bdev_nvme_attach_controller" 00:20:18.278 },{ 00:20:18.278 "params": { 00:20:18.278 "name": "Nvme3", 00:20:18.278 "trtype": "tcp", 00:20:18.278 "traddr": "10.0.0.2", 00:20:18.278 "adrfam": "ipv4", 00:20:18.278 "trsvcid": "4420", 00:20:18.278 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:18.278 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:18.278 "hdgst": false, 00:20:18.278 "ddgst": false 00:20:18.278 }, 00:20:18.278 "method": "bdev_nvme_attach_controller" 00:20:18.278 },{ 00:20:18.278 "params": { 00:20:18.278 "name": "Nvme4", 00:20:18.278 "trtype": "tcp", 00:20:18.278 "traddr": "10.0.0.2", 00:20:18.278 "adrfam": "ipv4", 00:20:18.278 "trsvcid": "4420", 00:20:18.278 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:18.278 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:18.278 "hdgst": false, 00:20:18.278 "ddgst": false 00:20:18.278 }, 00:20:18.278 "method": "bdev_nvme_attach_controller" 00:20:18.278 },{ 00:20:18.278 "params": { 00:20:18.278 "name": "Nvme5", 00:20:18.278 "trtype": "tcp", 00:20:18.278 "traddr": "10.0.0.2", 00:20:18.278 "adrfam": "ipv4", 00:20:18.278 "trsvcid": "4420", 00:20:18.278 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:18.278 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:18.278 "hdgst": false, 00:20:18.278 "ddgst": false 00:20:18.278 }, 00:20:18.278 "method": "bdev_nvme_attach_controller" 00:20:18.278 },{ 00:20:18.278 "params": { 00:20:18.278 "name": "Nvme6", 00:20:18.278 "trtype": "tcp", 00:20:18.278 "traddr": "10.0.0.2", 00:20:18.278 "adrfam": "ipv4", 00:20:18.278 "trsvcid": "4420", 00:20:18.278 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:18.278 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:18.278 "hdgst": false, 00:20:18.278 "ddgst": false 00:20:18.278 }, 00:20:18.278 "method": "bdev_nvme_attach_controller" 00:20:18.278 },{ 00:20:18.278 "params": { 00:20:18.278 "name": "Nvme7", 00:20:18.278 "trtype": "tcp", 00:20:18.278 "traddr": "10.0.0.2", 00:20:18.278 "adrfam": "ipv4", 00:20:18.278 "trsvcid": "4420", 00:20:18.278 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:18.278 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:18.278 "hdgst": false, 00:20:18.278 "ddgst": false 00:20:18.278 }, 00:20:18.278 "method": "bdev_nvme_attach_controller" 00:20:18.278 },{ 00:20:18.278 "params": { 00:20:18.278 "name": "Nvme8", 00:20:18.278 "trtype": "tcp", 00:20:18.278 "traddr": "10.0.0.2", 00:20:18.278 "adrfam": "ipv4", 00:20:18.278 "trsvcid": "4420", 00:20:18.278 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:18.278 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:18.278 "hdgst": false, 00:20:18.278 "ddgst": false 00:20:18.278 }, 00:20:18.278 "method": "bdev_nvme_attach_controller" 00:20:18.278 },{ 00:20:18.278 "params": { 00:20:18.278 "name": "Nvme9", 00:20:18.278 "trtype": "tcp", 00:20:18.278 "traddr": "10.0.0.2", 00:20:18.278 "adrfam": "ipv4", 00:20:18.278 "trsvcid": "4420", 00:20:18.278 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:18.278 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:18.278 "hdgst": false, 00:20:18.278 "ddgst": false 00:20:18.278 }, 00:20:18.278 "method": "bdev_nvme_attach_controller" 00:20:18.278 },{ 00:20:18.278 "params": { 00:20:18.278 "name": "Nvme10", 00:20:18.278 "trtype": "tcp", 00:20:18.278 "traddr": "10.0.0.2", 00:20:18.278 "adrfam": "ipv4", 00:20:18.278 "trsvcid": "4420", 00:20:18.278 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:18.278 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:18.278 "hdgst": false, 00:20:18.278 "ddgst": false 00:20:18.278 }, 00:20:18.278 "method": "bdev_nvme_attach_controller" 00:20:18.278 }' 00:20:18.278 [2024-11-20 07:22:21.616168] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:20:18.278 [2024-11-20 07:22:21.616266] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2547681 ] 00:20:18.278 [2024-11-20 07:22:21.692258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.537 [2024-11-20 07:22:21.752881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.910 Running I/O for 10 seconds... 00:20:20.474 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:20.474 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:20:20.474 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:20.474 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.474 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:20.474 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.474 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:20.474 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:20.474 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:20.474 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:20:20.474 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:20:20.474 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:20.474 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:20.474 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:20.474 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:20.474 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.474 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:20.474 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.474 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:20:20.474 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:20:20.474 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:20.732 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:20.732 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:20.732 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:20.733 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:20.733 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.733 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:20.733 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.733 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:20:20.733 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:20:20.733 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:20:20.733 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:20:20.733 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:20:20.733 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2547681 00:20:20.733 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 2547681 ']' 00:20:20.733 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 2547681 00:20:20.733 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:20:20.733 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:20.733 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2547681 00:20:20.733 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:20.733 07:22:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:20.733 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2547681' 00:20:20.733 killing process with pid 2547681 00:20:20.733 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 2547681 00:20:20.733 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 2547681 00:20:20.733 2002.00 IOPS, 125.12 MiB/s [2024-11-20T06:22:24.166Z] Received shutdown signal, test time was about 1.040757 seconds 00:20:20.733 00:20:20.733 Latency(us) 00:20:20.733 [2024-11-20T06:22:24.166Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.733 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:20.733 Verification LBA range: start 0x0 length 0x400 00:20:20.733 Nvme1n1 : 1.04 246.17 15.39 0.00 0.00 256198.16 18641.35 262532.36 00:20:20.733 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:20.733 Verification LBA range: start 0x0 length 0x400 00:20:20.733 Nvme2n1 : 0.98 196.55 12.28 0.00 0.00 315918.92 35340.89 242337.56 00:20:20.733 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:20.733 Verification LBA range: start 0x0 length 0x400 00:20:20.733 Nvme3n1 : 1.04 247.01 15.44 0.00 0.00 247142.59 20194.80 257872.02 00:20:20.733 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:20.733 Verification LBA range: start 0x0 length 0x400 00:20:20.733 Nvme4n1 : 1.02 256.59 16.04 0.00 0.00 228818.01 19515.16 253211.69 00:20:20.733 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:20.733 Verification LBA range: start 0x0 length 0x400 00:20:20.733 Nvme5n1 : 1.03 254.02 15.88 0.00 0.00 229984.54 5048.70 254765.13 00:20:20.733 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:20.733 Verification LBA range: start 0x0 length 0x400 00:20:20.733 Nvme6n1 : 1.03 249.72 15.61 0.00 0.00 230571.99 18447.17 254765.13 00:20:20.733 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:20.733 Verification LBA range: start 0x0 length 0x400 00:20:20.733 Nvme7n1 : 0.98 196.06 12.25 0.00 0.00 286153.13 18835.53 256318.58 00:20:20.733 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:20.733 Verification LBA range: start 0x0 length 0x400 00:20:20.733 Nvme8n1 : 1.03 251.77 15.74 0.00 0.00 219909.22 1601.99 257872.02 00:20:20.733 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:20.733 Verification LBA range: start 0x0 length 0x400 00:20:20.733 Nvme9n1 : 1.00 196.19 12.26 0.00 0.00 272914.72 4563.25 273406.48 00:20:20.733 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:20.733 Verification LBA range: start 0x0 length 0x400 00:20:20.733 Nvme10n1 : 0.99 193.37 12.09 0.00 0.00 273030.00 20000.62 281173.71 00:20:20.733 [2024-11-20T06:22:24.166Z] =================================================================================================================== 00:20:20.733 [2024-11-20T06:22:24.166Z] Total : 2287.45 142.97 0.00 0.00 252504.91 1601.99 281173.71 00:20:20.991 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:20:22.364 07:22:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2547585 00:20:22.364 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:20:22.364 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:22.364 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:22.364 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:22.364 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:22.364 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:22.364 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:20:22.364 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:22.364 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:20:22.364 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:22.364 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:22.364 rmmod nvme_tcp 00:20:22.364 rmmod nvme_fabrics 00:20:22.364 rmmod nvme_keyring 00:20:22.364 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:22.364 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:20:22.364 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:20:22.364 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2547585 ']' 00:20:22.364 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2547585 00:20:22.364 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 2547585 ']' 00:20:22.364 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 2547585 00:20:22.364 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:20:22.364 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:22.364 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2547585 00:20:22.364 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:22.364 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:22.364 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2547585' 00:20:22.364 killing process with pid 2547585 00:20:22.364 07:22:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 2547585 00:20:22.364 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 2547585 00:20:22.623 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:22.623 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:22.623 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:22.623 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:20:22.623 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:20:22.623 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:20:22.623 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:22.623 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:22.623 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:22.623 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.623 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:22.623 07:22:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.161 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:25.161 00:20:25.161 real 0m7.616s 00:20:25.161 user 0m22.993s 00:20:25.161 sys 0m1.488s 00:20:25.161 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:25.161 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:25.161 ************************************ 00:20:25.161 END TEST nvmf_shutdown_tc2 00:20:25.161 ************************************ 00:20:25.161 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:25.161 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:25.161 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:25.161 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:25.161 ************************************ 00:20:25.161 START TEST nvmf_shutdown_tc3 00:20:25.161 ************************************ 00:20:25.161 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc3 00:20:25.161 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:20:25.161 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:25.161 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:25.161 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:25.161 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:25.161 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:25.161 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:25.161 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.161 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:25.161 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.161 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:25.161 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:25.161 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:25.161 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:25.161 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:25.161 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:25.161 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:25.161 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:25.161 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:25.161 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:25.161 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:25.161 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:20:25.161 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:25.161 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:20:25.161 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:20:25.161 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:20:25.161 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:20:25.161 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:20:25.161 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:25.161 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:25.161 07:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:25.161 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:25.162 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:25.162 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:25.162 Found net devices under 0000:09:00.0: cvl_0_0 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:25.162 07:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:25.162 Found net devices under 0000:09:00.1: cvl_0_1 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:25.162 07:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:25.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:25.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:20:25.162 00:20:25.162 --- 10.0.0.2 ping statistics --- 00:20:25.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.162 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:25.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:25.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:20:25.162 00:20:25.162 --- 10.0.0.1 ping statistics --- 00:20:25.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.162 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:25.162 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:25.163 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:25.163 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:25.163 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:25.163 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:25.163 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:25.163 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2548554 00:20:25.163 07:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:25.163 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2548554 00:20:25.163 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 2548554 ']' 00:20:25.163 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:25.163 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:25.163 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:25.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:25.163 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:25.163 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:25.163 [2024-11-20 07:22:28.317089] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:20:25.163 [2024-11-20 07:22:28.317160] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:25.163 [2024-11-20 07:22:28.390410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:25.163 [2024-11-20 07:22:28.449590] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:25.163 [2024-11-20 07:22:28.449658] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:25.163 [2024-11-20 07:22:28.449694] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:25.163 [2024-11-20 07:22:28.449706] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:25.163 [2024-11-20 07:22:28.449716] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
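By this point nvmf_tcp_init has moved cvl_0_0 into the cvl_0_0_ns_spdk namespace, addressed both ends on 10.0.0.0/24, opened TCP port 4420 via iptables, and verified reachability with ping in both directions. nvmfappstart then launches nvmf_tgt inside that namespace (the repeated "ip netns exec" prefix in the trace is redundant but harmless, since re-entering the same namespace is a no-op) and waitforlisten blocks until the RPC socket answers. A rough stand-alone equivalent of the launch-and-wait step, with assumptions: SPDK_DIR is a placeholder path, rpc.py framework_wait_init is used here only as a readiness probe, and the retry count and sleep interval are illustrative rather than the harness's exact values:

  NS=cvl_0_0_ns_spdk
  SPDK_DIR=/path/to/spdk          # placeholder for the checked-out SPDK tree
  SOCK=/var/tmp/spdk.sock

  # -i 0: shared-memory id, -e 0xFFFF: enable all tracepoint groups,
  # -m 0x1E: run reactors on cores 1-4 -- all matching the notices in the log.
  ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!                      # pid of the netns-exec wrapper; good enough for liveness checks

  # Poll until the app has finished initialization and answers on its RPC socket.
  for ((i = 0; i < 100; i++)); do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
      "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" framework_wait_init &>/dev/null && break
      sleep 0.5
  done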
00:20:25.163 [2024-11-20 07:22:28.451269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:25.163 [2024-11-20 07:22:28.451373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:25.163 [2024-11-20 07:22:28.451338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:25.163 [2024-11-20 07:22:28.451390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:25.163 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:25.163 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:20:25.163 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:25.163 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:25.163 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:25.421 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:25.421 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:25.421 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.421 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:25.421 [2024-11-20 07:22:28.608230] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:25.421 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.421 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:25.421 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:25.421 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:25.421 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:25.421 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:25.421 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:25.421 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:25.421 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:25.421 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:25.421 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:25.421 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:25.421 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:20:25.421 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:25.421 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:25.421 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:25.421 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:25.421 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:25.421 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:25.421 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:25.421 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:25.421 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:25.421 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:25.421 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:25.421 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:25.421 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:25.421 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:25.421 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.421 07:22:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:25.421 Malloc1 00:20:25.421 [2024-11-20 07:22:28.716155] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:25.421 Malloc2 00:20:25.421 Malloc3 00:20:25.421 Malloc4 00:20:25.679 Malloc5 00:20:25.679 Malloc6 00:20:25.679 Malloc7 00:20:25.679 Malloc8 00:20:25.679 Malloc9 00:20:25.939 Malloc10 00:20:25.939 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.939 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:25.939 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:25.939 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:25.939 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2548731 00:20:25.939 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2548731 /var/tmp/bdevperf.sock 00:20:25.939 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 2548731 ']' 00:20:25.939 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:25.939 07:22:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:25.939 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:25.939 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:25.939 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:20:25.939 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:25.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:25.939 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:20:25.939 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:25.939 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:25.939 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:25.939 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:25.939 { 00:20:25.939 "params": { 00:20:25.939 "name": "Nvme$subsystem", 00:20:25.939 "trtype": "$TEST_TRANSPORT", 00:20:25.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:25.939 "adrfam": "ipv4", 00:20:25.939 "trsvcid": "$NVMF_PORT", 00:20:25.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:25.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:25.939 "hdgst": ${hdgst:-false}, 00:20:25.939 "ddgst": ${ddgst:-false} 00:20:25.939 }, 00:20:25.939 "method": "bdev_nvme_attach_controller" 00:20:25.939 } 00:20:25.939 EOF 00:20:25.939 )") 00:20:25.939 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:25.939 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:25.939 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:25.939 { 00:20:25.939 "params": { 00:20:25.939 "name": "Nvme$subsystem", 00:20:25.939 "trtype": "$TEST_TRANSPORT", 00:20:25.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:25.939 "adrfam": "ipv4", 00:20:25.939 "trsvcid": "$NVMF_PORT", 00:20:25.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:25.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:25.939 "hdgst": ${hdgst:-false}, 00:20:25.939 "ddgst": ${ddgst:-false} 00:20:25.939 }, 00:20:25.939 "method": "bdev_nvme_attach_controller" 00:20:25.939 } 00:20:25.939 EOF 00:20:25.939 )") 00:20:25.939 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:25.939 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:25.939 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:25.939 { 00:20:25.939 "params": { 00:20:25.940 
"name": "Nvme$subsystem", 00:20:25.940 "trtype": "$TEST_TRANSPORT", 00:20:25.940 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:25.940 "adrfam": "ipv4", 00:20:25.940 "trsvcid": "$NVMF_PORT", 00:20:25.940 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:25.940 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:25.940 "hdgst": ${hdgst:-false}, 00:20:25.940 "ddgst": ${ddgst:-false} 00:20:25.940 }, 00:20:25.940 "method": "bdev_nvme_attach_controller" 00:20:25.940 } 00:20:25.940 EOF 00:20:25.940 )") 00:20:25.940 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:25.940 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:25.940 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:25.940 { 00:20:25.940 "params": { 00:20:25.940 "name": "Nvme$subsystem", 00:20:25.940 "trtype": "$TEST_TRANSPORT", 00:20:25.940 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:25.940 "adrfam": "ipv4", 00:20:25.940 "trsvcid": "$NVMF_PORT", 00:20:25.940 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:25.940 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:25.940 "hdgst": ${hdgst:-false}, 00:20:25.940 "ddgst": ${ddgst:-false} 00:20:25.940 }, 00:20:25.940 "method": "bdev_nvme_attach_controller" 00:20:25.940 } 00:20:25.940 EOF 00:20:25.940 )") 00:20:25.940 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:25.940 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:25.940 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:25.940 { 00:20:25.940 "params": { 00:20:25.940 "name": "Nvme$subsystem", 00:20:25.940 "trtype": "$TEST_TRANSPORT", 00:20:25.940 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:25.940 "adrfam": "ipv4", 00:20:25.940 "trsvcid": "$NVMF_PORT", 00:20:25.940 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:25.940 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:25.940 "hdgst": ${hdgst:-false}, 00:20:25.940 "ddgst": ${ddgst:-false} 00:20:25.940 }, 00:20:25.940 "method": "bdev_nvme_attach_controller" 00:20:25.940 } 00:20:25.940 EOF 00:20:25.940 )") 00:20:25.940 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:25.940 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:25.940 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:25.940 { 00:20:25.940 "params": { 00:20:25.940 "name": "Nvme$subsystem", 00:20:25.940 "trtype": "$TEST_TRANSPORT", 00:20:25.940 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:25.940 "adrfam": "ipv4", 00:20:25.940 "trsvcid": "$NVMF_PORT", 00:20:25.940 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:25.940 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:25.940 "hdgst": ${hdgst:-false}, 00:20:25.940 "ddgst": ${ddgst:-false} 00:20:25.940 }, 00:20:25.940 "method": "bdev_nvme_attach_controller" 00:20:25.940 } 00:20:25.940 EOF 00:20:25.940 )") 00:20:25.940 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:25.940 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:20:25.940 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:25.940 { 00:20:25.940 "params": { 00:20:25.940 "name": "Nvme$subsystem", 00:20:25.940 "trtype": "$TEST_TRANSPORT", 00:20:25.940 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:25.940 "adrfam": "ipv4", 00:20:25.940 "trsvcid": "$NVMF_PORT", 00:20:25.940 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:25.940 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:25.940 "hdgst": ${hdgst:-false}, 00:20:25.940 "ddgst": ${ddgst:-false} 00:20:25.940 }, 00:20:25.940 "method": "bdev_nvme_attach_controller" 00:20:25.940 } 00:20:25.940 EOF 00:20:25.940 )") 00:20:25.940 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:25.940 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:25.940 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:25.940 { 00:20:25.940 "params": { 00:20:25.940 "name": "Nvme$subsystem", 00:20:25.940 "trtype": "$TEST_TRANSPORT", 00:20:25.940 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:25.940 "adrfam": "ipv4", 00:20:25.940 "trsvcid": "$NVMF_PORT", 00:20:25.940 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:25.940 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:25.940 "hdgst": ${hdgst:-false}, 00:20:25.940 "ddgst": ${ddgst:-false} 00:20:25.940 }, 00:20:25.940 "method": "bdev_nvme_attach_controller" 00:20:25.940 } 00:20:25.940 EOF 00:20:25.940 )") 00:20:25.940 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:25.940 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:25.940 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:25.940 { 00:20:25.940 "params": { 00:20:25.940 "name": "Nvme$subsystem", 00:20:25.940 "trtype": "$TEST_TRANSPORT", 00:20:25.940 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:25.940 "adrfam": "ipv4", 00:20:25.940 "trsvcid": "$NVMF_PORT", 00:20:25.940 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:25.940 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:25.940 "hdgst": ${hdgst:-false}, 00:20:25.940 "ddgst": ${ddgst:-false} 00:20:25.940 }, 00:20:25.940 "method": "bdev_nvme_attach_controller" 00:20:25.940 } 00:20:25.940 EOF 00:20:25.940 )") 00:20:25.940 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:25.940 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:25.940 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:25.940 { 00:20:25.940 "params": { 00:20:25.940 "name": "Nvme$subsystem", 00:20:25.940 "trtype": "$TEST_TRANSPORT", 00:20:25.940 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:25.940 "adrfam": "ipv4", 00:20:25.940 "trsvcid": "$NVMF_PORT", 00:20:25.940 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:25.940 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:25.940 "hdgst": ${hdgst:-false}, 00:20:25.940 "ddgst": ${ddgst:-false} 00:20:25.940 }, 00:20:25.940 "method": "bdev_nvme_attach_controller" 00:20:25.940 } 00:20:25.940 EOF 00:20:25.940 )") 00:20:25.940 07:22:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:25.940 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:20:25.940 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:20:25.940 07:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:25.940 "params": { 00:20:25.940 "name": "Nvme1", 00:20:25.940 "trtype": "tcp", 00:20:25.940 "traddr": "10.0.0.2", 00:20:25.940 "adrfam": "ipv4", 00:20:25.940 "trsvcid": "4420", 00:20:25.940 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:25.940 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:25.940 "hdgst": false, 00:20:25.940 "ddgst": false 00:20:25.940 }, 00:20:25.940 "method": "bdev_nvme_attach_controller" 00:20:25.940 },{ 00:20:25.940 "params": { 00:20:25.940 "name": "Nvme2", 00:20:25.940 "trtype": "tcp", 00:20:25.940 "traddr": "10.0.0.2", 00:20:25.940 "adrfam": "ipv4", 00:20:25.940 "trsvcid": "4420", 00:20:25.940 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:25.940 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:25.940 "hdgst": false, 00:20:25.940 "ddgst": false 00:20:25.940 }, 00:20:25.940 "method": "bdev_nvme_attach_controller" 00:20:25.940 },{ 00:20:25.940 "params": { 00:20:25.940 "name": "Nvme3", 00:20:25.940 "trtype": "tcp", 00:20:25.940 "traddr": "10.0.0.2", 00:20:25.940 "adrfam": "ipv4", 00:20:25.940 "trsvcid": "4420", 00:20:25.940 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:25.940 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:25.940 "hdgst": false, 00:20:25.940 "ddgst": false 00:20:25.940 }, 00:20:25.940 "method": "bdev_nvme_attach_controller" 00:20:25.940 },{ 00:20:25.940 "params": { 00:20:25.940 "name": "Nvme4", 00:20:25.940 "trtype": "tcp", 00:20:25.940 "traddr": "10.0.0.2", 00:20:25.940 "adrfam": "ipv4", 00:20:25.940 "trsvcid": "4420", 00:20:25.940 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:25.940 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:25.940 "hdgst": false, 00:20:25.940 "ddgst": false 00:20:25.940 }, 00:20:25.940 "method": "bdev_nvme_attach_controller" 00:20:25.940 },{ 00:20:25.940 "params": { 00:20:25.940 "name": "Nvme5", 00:20:25.940 "trtype": "tcp", 00:20:25.941 "traddr": "10.0.0.2", 00:20:25.941 "adrfam": "ipv4", 00:20:25.941 "trsvcid": "4420", 00:20:25.941 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:25.941 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:25.941 "hdgst": false, 00:20:25.941 "ddgst": false 00:20:25.941 }, 00:20:25.941 "method": "bdev_nvme_attach_controller" 00:20:25.941 },{ 00:20:25.941 "params": { 00:20:25.941 "name": "Nvme6", 00:20:25.941 "trtype": "tcp", 00:20:25.941 "traddr": "10.0.0.2", 00:20:25.941 "adrfam": "ipv4", 00:20:25.941 "trsvcid": "4420", 00:20:25.941 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:25.941 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:25.941 "hdgst": false, 00:20:25.941 "ddgst": false 00:20:25.941 }, 00:20:25.941 "method": "bdev_nvme_attach_controller" 00:20:25.941 },{ 00:20:25.941 "params": { 00:20:25.941 "name": "Nvme7", 00:20:25.941 "trtype": "tcp", 00:20:25.941 "traddr": "10.0.0.2", 00:20:25.941 "adrfam": "ipv4", 00:20:25.941 "trsvcid": "4420", 00:20:25.941 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:25.941 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:25.941 "hdgst": false, 00:20:25.941 "ddgst": false 00:20:25.941 }, 00:20:25.941 "method": "bdev_nvme_attach_controller" 00:20:25.941 },{ 00:20:25.941 "params": { 00:20:25.941 "name": "Nvme8", 00:20:25.941 "trtype": "tcp", 
00:20:25.941 "traddr": "10.0.0.2", 00:20:25.941 "adrfam": "ipv4", 00:20:25.941 "trsvcid": "4420", 00:20:25.941 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:25.941 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:25.941 "hdgst": false, 00:20:25.941 "ddgst": false 00:20:25.941 }, 00:20:25.941 "method": "bdev_nvme_attach_controller" 00:20:25.941 },{ 00:20:25.941 "params": { 00:20:25.941 "name": "Nvme9", 00:20:25.941 "trtype": "tcp", 00:20:25.941 "traddr": "10.0.0.2", 00:20:25.941 "adrfam": "ipv4", 00:20:25.941 "trsvcid": "4420", 00:20:25.941 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:25.941 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:25.941 "hdgst": false, 00:20:25.941 "ddgst": false 00:20:25.941 }, 00:20:25.941 "method": "bdev_nvme_attach_controller" 00:20:25.941 },{ 00:20:25.941 "params": { 00:20:25.941 "name": "Nvme10", 00:20:25.941 "trtype": "tcp", 00:20:25.941 "traddr": "10.0.0.2", 00:20:25.941 "adrfam": "ipv4", 00:20:25.941 "trsvcid": "4420", 00:20:25.941 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:25.941 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:25.941 "hdgst": false, 00:20:25.941 "ddgst": false 00:20:25.941 }, 00:20:25.941 "method": "bdev_nvme_attach_controller" 00:20:25.941 }' 00:20:25.941 [2024-11-20 07:22:29.245482] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:20:25.941 [2024-11-20 07:22:29.245565] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2548731 ] 00:20:25.941 [2024-11-20 07:22:29.316748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.199 [2024-11-20 07:22:29.377439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.095 Running I/O for 10 seconds... 
00:20:28.095 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:28.095 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:20:28.095 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:28.095 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.095 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:28.095 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.095 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:28.095 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:28.095 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:28.095 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:28.095 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:20:28.095 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:20:28.095 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:28.095 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:28.095 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:28.095 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:28.095 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.095 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:28.095 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.095 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=14 00:20:28.095 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 14 -ge 100 ']' 00:20:28.095 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:28.352 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:28.352 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:28.352 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:28.352 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:28.352 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.352 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:28.352 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.352 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=82 00:20:28.352 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 82 -ge 100 ']' 00:20:28.352 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:28.624 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:28.624 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:28.624 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:28.624 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:28.624 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.624 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:28.624 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.624 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=149 00:20:28.624 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 149 -ge 100 ']' 00:20:28.624 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:20:28.624 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:20:28.624 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:20:28.624 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2548554 00:20:28.624 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 2548554 ']' 00:20:28.624 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 2548554 00:20:28.624 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # uname 00:20:28.624 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:28.624 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2548554 00:20:28.624 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:28.624 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:28.624 07:22:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2548554' 00:20:28.624 killing process with pid 2548554 00:20:28.624 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # kill 2548554 00:20:28.624 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@976 -- # wait 2548554 00:20:28.624 [2024-11-20 07:22:31.964595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.624 [2024-11-20 07:22:31.964744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.624 [2024-11-20 07:22:31.964762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.624 [2024-11-20 07:22:31.964776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.624 [2024-11-20 07:22:31.964788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.624 [2024-11-20 07:22:31.964801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.624 [2024-11-20 07:22:31.964814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.624 [2024-11-20 07:22:31.964827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.624 [2024-11-20 07:22:31.964840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.624 [2024-11-20 07:22:31.964853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.624 [2024-11-20 07:22:31.964865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.624 [2024-11-20 07:22:31.964877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.624 [2024-11-20 07:22:31.964889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.624 [2024-11-20 07:22:31.964901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.624 [2024-11-20 07:22:31.964913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.624 [2024-11-20 07:22:31.964926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.624 [2024-11-20 07:22:31.964938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.624 [2024-11-20 07:22:31.964961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.624 [2024-11-20 07:22:31.964974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 
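For context on the two traces above: waitforio (shutdown.sh@51-70) polls bdevperf's RPC socket for Nvme1n1's read counter and only lets the test proceed once at least 100 reads have completed, which is why read_io_count climbs 14, then 82, then 149 before the break. killprocess then stops the nvmf target (pid 2548554) while bdevperf is still driving I/O; the burst of nvmf_tcp_qpair_set_recv_state messages that follows appears to be the target tearing down the host connections during that forced shutdown. A condensed sketch of the polling loop, assuming rpc.py is on PATH (the harness uses its rpc_cmd wrapper instead); the threshold of 100, the 10-poll budget and the 0.25 s sleep are taken from the trace:

  # Usage: waitforio <bdevperf rpc socket> <bdev name>
  waitforio() {
      local sock=$1 bdev=$2 i reads
      for ((i = 10; i != 0; i--)); do
          reads=$(rpc.py -s "$sock" bdev_get_iostat -b "$bdev" \
                  | jq -r '.bdevs[0].num_read_ops')
          # Enough traffic has flowed once the bdev reports 100 completed reads.
          if [ "${reads:-0}" -ge 100 ]; then
              return 0
          fi
          sleep 0.25
      done
      return 1
  }

  # tc3: wait until I/O is in flight, then take the target down underneath it.
  waitforio /var/tmp/bdevperf.sock Nvme1n1 && kill "$nvmfpid"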
00:20:28.624 [2024-11-20 07:22:31.964985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.624 [2024-11-20 07:22:31.964997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.624 [2024-11-20 07:22:31.965009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.624 [2024-11-20 07:22:31.965021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.624 [2024-11-20 07:22:31.965033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.624 [2024-11-20 07:22:31.965044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.625 [2024-11-20 07:22:31.965056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.625 [2024-11-20 07:22:31.965068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.625 [2024-11-20 07:22:31.965080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.625 [2024-11-20 07:22:31.965092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.625 [2024-11-20 07:22:31.965103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.625 [2024-11-20 07:22:31.965115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.625 [2024-11-20 07:22:31.965127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.625 [2024-11-20 07:22:31.965139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.625 [2024-11-20 07:22:31.965150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.625 [2024-11-20 07:22:31.965163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.625 [2024-11-20 07:22:31.965175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.625 [2024-11-20 07:22:31.965186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.625 [2024-11-20 07:22:31.965199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.625 [2024-11-20 07:22:31.965212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.625 [2024-11-20 07:22:31.965223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.625 [2024-11-20 07:22:31.965235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is 
same with the state(6) to be set 00:20:28.625 [2024-11-20 07:22:31.965247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.625 [2024-11-20 07:22:31.965259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.625 [2024-11-20 07:22:31.965271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.625 [2024-11-20 07:22:31.965286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.625 [2024-11-20 07:22:31.965298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.625 [2024-11-20 07:22:31.965320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.625 [2024-11-20 07:22:31.965333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.625 [2024-11-20 07:22:31.965345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.625 [2024-11-20 07:22:31.965357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.625 [2024-11-20 07:22:31.965372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.625 [2024-11-20 07:22:31.965384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.625 [2024-11-20 07:22:31.965396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.625 [2024-11-20 07:22:31.965408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.625 [2024-11-20 07:22:31.965419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.625 [2024-11-20 07:22:31.965431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.625 [2024-11-20 07:22:31.965443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.625 [2024-11-20 07:22:31.965454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.625 [2024-11-20 07:22:31.965466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.625 [2024-11-20 07:22:31.965478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.625 [2024-11-20 07:22:31.965489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.625 [2024-11-20 07:22:31.965501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5640 is same with the state(6) to be set 00:20:28.625 [2024-11-20 07:22:31.965512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xbd5640 is same with the state(6) to be set
00:20:28.625 [2024-11-20 07:22:31.967241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbadf20 is same with the state(6) to be set
[... same tcp.c:1773 recv-state error repeated for tqpair=0xbadf20 through 2024-11-20 07:22:31.968067 ...]
00:20:28.626 [2024-11-20 07:22:31.969750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5b30 is same with the state(6) to be set
[... same tcp.c:1773 recv-state error repeated for tqpair=0xbd5b30 through 2024-11-20 07:22:31.970524 ...]
00:20:28.626 [2024-11-20 07:22:31.972179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6000 is same with the state(6) to be set
[... same tcp.c:1773 recv-state error repeated for tqpair=0xbd6000 through 2024-11-20 07:22:31.973016 ...]
00:20:28.688 [2024-11-20 07:22:31.974319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6380 is same with the state(6) to be set
[... same tcp.c:1773 recv-state error repeated for tqpair=0xbd6380 through 2024-11-20 07:22:31.975114 ...]
00:20:28.689 [2024-11-20 07:22:31.976641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd6d20 is same with the state(6) to be set
[... same tcp.c:1773 recv-state error repeated for tqpair=0xbd6d20 through 2024-11-20 07:22:31.976700 ...]
00:20:28.689 [2024-11-20 07:22:31.977959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x965c90 is same with the state(6) to be set
[... same tcp.c:1773 recv-state error repeated for tqpair=0x965c90 through 2024-11-20 07:22:31.978737 ...]
00:20:28.689 [2024-11-20 07:22:31.980004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x966630 is same with the state(6) to be set
[... same tcp.c:1773 recv-state error repeated for tqpair=0x966630 through 2024-11-20 07:22:31.980789 ...]
00:20:28.690 [2024-11-20 07:22:31.984049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.690 [2024-11-20 07:22:31.984098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.984118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.690 [2024-11-20 07:22:31.984132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.984147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.690 [2024-11-20 07:22:31.984160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.984173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.690 [2024-11-20 07:22:31.984187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.984199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x6366e0 is same with the state(6) to be set 00:20:28.690 [2024-11-20 07:22:31.984266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.690 [2024-11-20 07:22:31.984287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.984312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.690 [2024-11-20 07:22:31.984328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.984342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.690 [2024-11-20 07:22:31.984355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.984369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.690 [2024-11-20 07:22:31.984382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.984395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa6040 is same with the state(6) to be set 00:20:28.690 [2024-11-20 07:22:31.984446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.690 [2024-11-20 07:22:31.984466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.984480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.690 [2024-11-20 07:22:31.984493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.984507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.690 [2024-11-20 07:22:31.984520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.984533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.690 [2024-11-20 07:22:31.984546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.984558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4c0 is same with the state(6) to be set 00:20:28.690 [2024-11-20 07:22:31.984607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.690 [2024-11-20 07:22:31.984627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 
[2024-11-20 07:22:31.984647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.690 [2024-11-20 07:22:31.984662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.984677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.690 [2024-11-20 07:22:31.984690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.984704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.690 [2024-11-20 07:22:31.984722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.984735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59f110 is same with the state(6) to be set 00:20:28.690 [2024-11-20 07:22:31.984783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.690 [2024-11-20 07:22:31.984803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.984817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.690 [2024-11-20 07:22:31.984831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.984844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.690 [2024-11-20 07:22:31.984857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.984870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.690 [2024-11-20 07:22:31.984883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.984895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75670 is same with the state(6) to be set 00:20:28.690 [2024-11-20 07:22:31.984943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.690 [2024-11-20 07:22:31.984972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.984992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.690 [2024-11-20 07:22:31.985005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.985019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.690 [2024-11-20 07:22:31.985032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.985045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.690 [2024-11-20 07:22:31.985057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.985069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62c90 is same with the state(6) to be set 00:20:28.690 [2024-11-20 07:22:31.985117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.690 [2024-11-20 07:22:31.985137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.985151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.690 [2024-11-20 07:22:31.985164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.985177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.690 [2024-11-20 07:22:31.985190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.985208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.690 [2024-11-20 07:22:31.985229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.985243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x633890 is same with the state(6) to be set 00:20:28.690 [2024-11-20 07:22:31.985291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.690 [2024-11-20 07:22:31.985321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.985337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.690 [2024-11-20 07:22:31.985360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.985375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.690 [2024-11-20 07:22:31.985387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.985401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.690 [2024-11-20 
07:22:31.985414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.985426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62b6e0 is same with the state(6) to be set 00:20:28.690 [2024-11-20 07:22:31.985474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.690 [2024-11-20 07:22:31.985494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.985508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.690 [2024-11-20 07:22:31.985521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.985535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.690 [2024-11-20 07:22:31.985547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.985561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.690 [2024-11-20 07:22:31.985574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.985586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6376f0 is same with the state(6) to be set 00:20:28.690 [2024-11-20 07:22:31.985637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.690 [2024-11-20 07:22:31.985656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.985671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.690 [2024-11-20 07:22:31.985684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.985702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.690 [2024-11-20 07:22:31.985716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.985729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.690 [2024-11-20 07:22:31.985742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.985754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa57f40 is same with the state(6) to be set 00:20:28.690 [2024-11-20 07:22:31.986470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 
lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.690 [2024-11-20 07:22:31.986497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.986521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.690 [2024-11-20 07:22:31.986536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.986552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.690 [2024-11-20 07:22:31.986567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.986582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.690 [2024-11-20 07:22:31.986596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.986612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.690 [2024-11-20 07:22:31.986626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.986642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.690 [2024-11-20 07:22:31.986656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.690 [2024-11-20 07:22:31.986671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.690 [2024-11-20 07:22:31.986685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.986701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.986714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.986731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.986744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.986760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.986773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.986802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.986817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.986833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.986847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.986862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.986876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.986892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.986905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.986921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.986935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.986950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.986964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.986980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.986993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.987009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.987023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.987038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.987052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.987067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.987081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.987097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.987111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.987126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.987140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.987156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.987173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.987190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.987204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.987220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.987234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.987249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.987263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.987279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.987293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.987316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.987332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.987358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.987372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.987388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.987401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.987417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.987431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.987446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.987460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.987476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.987489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.987505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.987519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.987534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.987548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.987567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.987582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.987598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.987617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.987632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.987646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.987661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.987675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.987691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.987705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.987721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.987735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.987750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.987764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.987780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.987794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.987809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.987823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.987839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.987853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.987868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.987882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.987897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.987912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.987928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.987946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.987962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.987976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.987991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.988005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.988021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 
[2024-11-20 07:22:31.988034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.988049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.988063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.988079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.988093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.988108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.988122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.988137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.988151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.988167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.988180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.988196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.988210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.988225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.988239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.691 [2024-11-20 07:22:31.988254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.691 [2024-11-20 07:22:31.988268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.988283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.988297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.988325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 
07:22:31.988340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.988361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.988374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.988389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.988403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.988418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.988432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.988479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:28.692 [2024-11-20 07:22:31.988868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.988892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.988913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.988928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.988944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.988958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.988974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.988988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.989003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.989017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.989033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.989046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 
07:22:31.989062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.989075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.989091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.989114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.989138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.989153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.989169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.989182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.989199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.989213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.989229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.989242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.989258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.989272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.989288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.989309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.989327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.989351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.989367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.989381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.989397] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.989410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.989426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.989440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.989455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.989469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.989486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.989499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.989515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.989533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.989550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.989564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.989580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.989593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.989612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.989626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.989642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.989655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.989670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.989684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.989699] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.989713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.989728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.989742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.989757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.989771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.989786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.989800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.989815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.989828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.989844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.989858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.989873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.989886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.989906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.989921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.989937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.989950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.989965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.989979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.989994] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.990007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.990023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.990036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.990052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.990065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.990081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.990096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.990112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.692 [2024-11-20 07:22:31.990125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.692 [2024-11-20 07:22:31.990141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.990155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.990170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.990184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.990200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.990213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.990229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.990243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.990258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.990276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.990292] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.990314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.990331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.990345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.990361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.990375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.990390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.990405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.990420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.990434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.990449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.990463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.990479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.990493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.990508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.990521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.990537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.990550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.990566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.990579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.990594] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.990608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.990623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.990638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.990661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.990675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.990691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.990704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.990720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.990734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.990749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.990762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.990777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.990791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.990806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.990820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.990855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:28.693 [2024-11-20 07:22:31.991068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.991091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.991113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.991128] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.991145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.991159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.991174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.991188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.991204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.991218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.991233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.991247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.991268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.991282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.991298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.991323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.991348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.991362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.991378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.991391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.991407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.991421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.991436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.991450] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.991466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.991480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.991495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.991509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.991524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.991538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.991553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.991567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.991582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.991596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.991611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.991625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.991640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.991658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.991675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.991689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.991705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.991718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.693 [2024-11-20 07:22:31.991734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.693 [2024-11-20 07:22:31.991748] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.991763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.991777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.991792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.991812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.991829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.991843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.991859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.991872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.991888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.991902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.991918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.991932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.991948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.991961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.991977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.991990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.992006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.992020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.992040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.992054] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.992070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.992084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.992099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.992113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.992129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.992142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.992159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.992172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.992188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.992212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.992229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.992243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.992259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.992273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.992289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.992315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.992334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.992353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.992369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.992382] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.992398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.992412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.992428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.992447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.992463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.992477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.992493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.992507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.992524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.992537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.992553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.992567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.992583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.992597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.992616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.992629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.992645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.992658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.992674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.992688] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.992704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.992718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.992734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.992748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.992764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.992778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.992793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.992813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.992834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.992848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.992864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.992878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.992893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.992908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.992924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.992937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.992953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.992967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.992982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.992996] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.993012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.993026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.694 [2024-11-20 07:22:31.993041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.694 [2024-11-20 07:22:31.993055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.695 [2024-11-20 07:22:31.993070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa3ce50 is same with the state(6) to be set 00:20:28.695 [2024-11-20 07:22:31.997200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:20:28.695 [2024-11-20 07:22:31.997249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:20:28.695 [2024-11-20 07:22:31.997278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x59f110 (9): Bad file descriptor 00:20:28.695 [2024-11-20 07:22:31.997311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa62c90 (9): Bad file descriptor 00:20:28.695 [2024-11-20 07:22:31.997346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6366e0 (9): Bad file descriptor 00:20:28.695 [2024-11-20 07:22:31.997376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa6040 (9): Bad file descriptor 00:20:28.695 [2024-11-20 07:22:31.997407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaae4c0 (9): Bad file descriptor 00:20:28.695 [2024-11-20 07:22:31.997442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa75670 (9): Bad file descriptor 00:20:28.695 [2024-11-20 07:22:31.997469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x633890 (9): Bad file descriptor 00:20:28.695 [2024-11-20 07:22:31.997504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x62b6e0 (9): Bad file descriptor 00:20:28.695 [2024-11-20 07:22:31.997534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6376f0 (9): Bad file descriptor 00:20:28.695 [2024-11-20 07:22:31.997563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa57f40 (9): Bad file descriptor 00:20:28.695 [2024-11-20 07:22:31.998154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:20:28.695 [2024-11-20 07:22:31.999476] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:28.695 [2024-11-20 07:22:31.999567] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:28.695 [2024-11-20 07:22:31.999742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:28.695 [2024-11-20 07:22:31.999774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa62c90 with addr=10.0.0.2, port=4420 00:20:28.695 
[2024-11-20 07:22:31.999793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62c90 is same with the state(6) to be set 00:20:28.695 [2024-11-20 07:22:31.999880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:28.695 [2024-11-20 07:22:31.999906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59f110 with addr=10.0.0.2, port=4420 00:20:28.695 [2024-11-20 07:22:31.999921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59f110 is same with the state(6) to be set 00:20:28.695 [2024-11-20 07:22:32.000001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:28.695 [2024-11-20 07:22:32.000028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa75670 with addr=10.0.0.2, port=4420 00:20:28.695 [2024-11-20 07:22:32.000044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75670 is same with the state(6) to be set 00:20:28.695 [2024-11-20 07:22:32.000108] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:28.695 [2024-11-20 07:22:32.000179] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:28.695 [2024-11-20 07:22:32.000249] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:28.695 [2024-11-20 07:22:32.000333] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:28.695 [2024-11-20 07:22:32.000428] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:28.695 [2024-11-20 07:22:32.000507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa62c90 (9): Bad file descriptor 00:20:28.695 [2024-11-20 07:22:32.000535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x59f110 (9): Bad file descriptor 00:20:28.695 [2024-11-20 07:22:32.000553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa75670 (9): Bad file descriptor 00:20:28.695 [2024-11-20 07:22:32.000677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:20:28.695 [2024-11-20 07:22:32.000699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:20:28.695 [2024-11-20 07:22:32.000717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:20:28.695 [2024-11-20 07:22:32.000733] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:20:28.695 [2024-11-20 07:22:32.000748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:20:28.695 [2024-11-20 07:22:32.000760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:20:28.695 [2024-11-20 07:22:32.000772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:20:28.695 [2024-11-20 07:22:32.000784] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:20:28.695 [2024-11-20 07:22:32.000804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:20:28.695 [2024-11-20 07:22:32.000816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:20:28.695 [2024-11-20 07:22:32.000828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:20:28.695 [2024-11-20 07:22:32.000840] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:20:28.695 [2024-11-20 07:22:32.007434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.695 [2024-11-20 07:22:32.007490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.695 [2024-11-20 07:22:32.007522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.695 [2024-11-20 07:22:32.007538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.695 [2024-11-20 07:22:32.007554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.695 [2024-11-20 07:22:32.007568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.695 [2024-11-20 07:22:32.007584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.695 [2024-11-20 07:22:32.007598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.695 [2024-11-20 07:22:32.007623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.695 [2024-11-20 07:22:32.007637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.695 [2024-11-20 07:22:32.007653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.695 [2024-11-20 07:22:32.007666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.695 [2024-11-20 07:22:32.007682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.695 [2024-11-20 07:22:32.007696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.695 [2024-11-20 07:22:32.007712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.695 [2024-11-20 07:22:32.007726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.695 [2024-11-20 07:22:32.007741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.695 [2024-11-20 07:22:32.007755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.695 [2024-11-20 07:22:32.007771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.695 [2024-11-20 07:22:32.007785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.695 [2024-11-20 07:22:32.007800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.695 [2024-11-20 07:22:32.007814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.695 [2024-11-20 07:22:32.007839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.695 [2024-11-20 07:22:32.007854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.695 [2024-11-20 07:22:32.007870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.695 [2024-11-20 07:22:32.007884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.695 [2024-11-20 07:22:32.007899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.695 [2024-11-20 07:22:32.007913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.695 [2024-11-20 07:22:32.007929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.695 [2024-11-20 07:22:32.007942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.695 [2024-11-20 07:22:32.007958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.695 [2024-11-20 07:22:32.007972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.695 [2024-11-20 07:22:32.007987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.695 [2024-11-20 07:22:32.008001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.695 [2024-11-20 07:22:32.008017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.695 [2024-11-20 07:22:32.008031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.695 [2024-11-20 07:22:32.008046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.695 [2024-11-20 07:22:32.008060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.695 [2024-11-20 07:22:32.008075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.695 [2024-11-20 07:22:32.008089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.695 [2024-11-20 07:22:32.008105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.695 [2024-11-20 07:22:32.008118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.695 [2024-11-20 07:22:32.008134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.695 [2024-11-20 07:22:32.008147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.695 [2024-11-20 07:22:32.008163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.695 [2024-11-20 07:22:32.008177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.695 [2024-11-20 07:22:32.008193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.696 [2024-11-20 07:22:32.008210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.696 [2024-11-20 07:22:32.008226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.696 [2024-11-20 07:22:32.008241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.696 [2024-11-20 07:22:32.008256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.696 [2024-11-20 07:22:32.008269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.696 [2024-11-20 07:22:32.008285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.696 [2024-11-20 07:22:32.008298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.696 [2024-11-20 07:22:32.008324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.696 [2024-11-20 07:22:32.008342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.696 [2024-11-20 07:22:32.008357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:28.696 [2024-11-20 07:22:32.008371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.696 [2024-11-20 07:22:32.008386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.696 [2024-11-20 07:22:32.008400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.696 [2024-11-20 07:22:32.008416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.696 [2024-11-20 07:22:32.008429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.696 [2024-11-20 07:22:32.008447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.696 [2024-11-20 07:22:32.008460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.696 [2024-11-20 07:22:32.008476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.696 [2024-11-20 07:22:32.008490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.696 [2024-11-20 07:22:32.008505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.696 [2024-11-20 07:22:32.008519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.696 [2024-11-20 07:22:32.008535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.696 [2024-11-20 07:22:32.008549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.696 [2024-11-20 07:22:32.008566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.696 [2024-11-20 07:22:32.008580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.696 [2024-11-20 07:22:32.008607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.696 [2024-11-20 07:22:32.008621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.696 [2024-11-20 07:22:32.008637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.696 [2024-11-20 07:22:32.008651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.696 [2024-11-20 07:22:32.008666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:28.696 [2024-11-20 07:22:32.008680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.696 [2024-11-20 07:22:32.008695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.696 [2024-11-20 07:22:32.008709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.696 [2024-11-20 07:22:32.008725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.696 [2024-11-20 07:22:32.008738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.696 [2024-11-20 07:22:32.008754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.696 [2024-11-20 07:22:32.008767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.696 [2024-11-20 07:22:32.008783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.696 [2024-11-20 07:22:32.008796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.696 [2024-11-20 07:22:32.008812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.696 [2024-11-20 07:22:32.008826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.696 [2024-11-20 07:22:32.008841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.696 [2024-11-20 07:22:32.008855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.696 [2024-11-20 07:22:32.008871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.696 [2024-11-20 07:22:32.008885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.696 [2024-11-20 07:22:32.008900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.696 [2024-11-20 07:22:32.008914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.696 [2024-11-20 07:22:32.008930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.696 [2024-11-20 07:22:32.008943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.696 [2024-11-20 07:22:32.008959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.696 [2024-11-20 
07:22:32.008977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.696 [2024-11-20 07:22:32.008993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.696 [2024-11-20 07:22:32.009007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.696 [2024-11-20 07:22:32.009023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.696 [2024-11-20 07:22:32.009037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.696 [2024-11-20 07:22:32.009053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.696 [2024-11-20 07:22:32.009066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.696 [2024-11-20 07:22:32.009082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.696 [2024-11-20 07:22:32.009096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.696 [2024-11-20 07:22:32.009111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.696 [2024-11-20 07:22:32.009125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.696 [2024-11-20 07:22:32.009141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.696 [2024-11-20 07:22:32.009155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.696 [2024-11-20 07:22:32.009170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.696 [2024-11-20 07:22:32.009184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.696 [2024-11-20 07:22:32.009200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.696 [2024-11-20 07:22:32.009213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.696 [2024-11-20 07:22:32.009229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.696 [2024-11-20 07:22:32.009242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.696 [2024-11-20 07:22:32.009258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.696 [2024-11-20 07:22:32.009271] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.696 [2024-11-20 07:22:32.009287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.696 [2024-11-20 07:22:32.009301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.696 [2024-11-20 07:22:32.009325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.696 [2024-11-20 07:22:32.009346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.009366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.009381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.009396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.009410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.009426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.009440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.009455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb24b30 is same with the state(6) to be set 00:20:28.697 [2024-11-20 07:22:32.010769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.010792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.010813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.010828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.010844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.010858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.010875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.010889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.010905] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.010919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.010934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.010948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.010963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.010976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.010992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.011005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.011021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.011034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.011055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.011070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.011086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.011099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.011115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.011129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.011145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.011159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.011174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.011189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.011205] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.011219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.011235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.011248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.011264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.011277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.011294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.011315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.011342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.011355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.011371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.011385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.011401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.011415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.011430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.011453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.011469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.011483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.011499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.011512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.011528] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.011542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.011557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.011571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.011586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.011599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.011624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.011638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.011653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.011667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.011682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.011696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.011713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.011727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.011742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.011756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.011771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.011785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.011801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.011814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.011834] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.011848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.011864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.011878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.011894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.011908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.011924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.011937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.011953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.697 [2024-11-20 07:22:32.011967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.697 [2024-11-20 07:22:32.011982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.011996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.012012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.012026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.012041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.012055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.012071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.012084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.012100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.012114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.012130] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.012143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.012158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.012172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.012187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.012205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.012221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.012234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.012250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.012264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.012279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.012293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.012315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.012331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.012355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.012369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.012384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.012397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.012412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.012426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.012441] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.012455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.012470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.012483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.012499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.012512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.012527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.012541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.012556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.012570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.012589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.012603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.012618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.012632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.012647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.012660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.012676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.012689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.012704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.012717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.012732] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83b400 is same with the state(6) to be set 00:20:28.698 [2024-11-20 07:22:32.013977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.014001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.014023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.014038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.014053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.014066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.014082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.014096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.014112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.014125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.014141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.014154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.014169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.014183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.014203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.014218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.014234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.014247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.014263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.014277] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.014292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.014314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.014331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.014345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.014361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.014374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.014389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.014403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.014418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.014432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.014447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.014460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.014475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.014489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.698 [2024-11-20 07:22:32.014504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.698 [2024-11-20 07:22:32.014517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 07:22:32.014532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.014546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 07:22:32.014561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.014590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 07:22:32.014606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.014620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 07:22:32.014635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.014648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 07:22:32.014664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.014678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 07:22:32.014693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.014706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 07:22:32.014721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.014735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 07:22:32.014750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.014764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 07:22:32.014779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.014793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 07:22:32.014808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.014822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 07:22:32.014837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.014850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 07:22:32.014867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.014880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 07:22:32.014896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.014910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 07:22:32.014925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.014939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 07:22:32.014959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.014974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 07:22:32.014990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.015003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 07:22:32.015019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.015032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 07:22:32.015048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.015062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 07:22:32.015077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.015091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 07:22:32.015107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.015120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 07:22:32.015136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.015149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 07:22:32.015165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.015178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 07:22:32.015193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.015207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 07:22:32.015223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.015237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 07:22:32.015252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.015266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 07:22:32.015281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.015295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 07:22:32.015324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.015339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 07:22:32.015370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.015385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 07:22:32.015400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.015414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 07:22:32.015429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.015442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 07:22:32.015457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.015470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 07:22:32.015486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.015499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:28.699 [2024-11-20 07:22:32.015514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.015527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 07:22:32.015543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.015556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 07:22:32.015571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.015585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 07:22:32.015600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.015614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 07:22:32.015629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.015649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 07:22:32.015664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.015677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 07:22:32.015693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.015706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 07:22:32.015721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.015738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 07:22:32.015754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.015768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 07:22:32.015783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.015797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.699 [2024-11-20 
07:22:32.015812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.699 [2024-11-20 07:22:32.015825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.700 [2024-11-20 07:22:32.015841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.700 [2024-11-20 07:22:32.015854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.700 [2024-11-20 07:22:32.015869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.700 [2024-11-20 07:22:32.015883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.700 [2024-11-20 07:22:32.015898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.700 [2024-11-20 07:22:32.015911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.700 [2024-11-20 07:22:32.015928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c700 is same with the state(6) to be set 00:20:28.700 [2024-11-20 07:22:32.017224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.700 [2024-11-20 07:22:32.017248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.700 [2024-11-20 07:22:32.017268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.700 [2024-11-20 07:22:32.017284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.700 [2024-11-20 07:22:32.017300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.700 [2024-11-20 07:22:32.017322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.700 [2024-11-20 07:22:32.017339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.700 [2024-11-20 07:22:32.017354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.700 [2024-11-20 07:22:32.017373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.700 [2024-11-20 07:22:32.017387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.700 [2024-11-20 07:22:32.017403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.700 [2024-11-20 07:22:32.017422] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.700 [2024-11-20 07:22:32.017439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.700 [2024-11-20 07:22:32.017453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.700 [2024-11-20 07:22:32.017469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.700 [2024-11-20 07:22:32.017482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.700 [2024-11-20 07:22:32.017498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.700 [2024-11-20 07:22:32.017512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.700 [2024-11-20 07:22:32.017528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.700 [2024-11-20 07:22:32.017541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.700 [2024-11-20 07:22:32.017557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.700 [2024-11-20 07:22:32.017570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.700 [2024-11-20 07:22:32.017586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.700 [2024-11-20 07:22:32.017599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.700 [2024-11-20 07:22:32.017615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.700 [2024-11-20 07:22:32.017629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.700 [2024-11-20 07:22:32.017654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.700 [2024-11-20 07:22:32.017667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.700 [2024-11-20 07:22:32.017683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.700 [2024-11-20 07:22:32.017696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.700 [2024-11-20 07:22:32.017712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.700 [2024-11-20 07:22:32.017725] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.700 [2024-11-20 07:22:32.017740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.700 [2024-11-20 07:22:32.017754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.700 [2024-11-20 07:22:32.017770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.700 [2024-11-20 07:22:32.017784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.700 [2024-11-20 07:22:32.017804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.700 [2024-11-20 07:22:32.017819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.700 [2024-11-20 07:22:32.017834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.700 [2024-11-20 07:22:32.017848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.700 [2024-11-20 07:22:32.017864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.700 [2024-11-20 07:22:32.017877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.700 [2024-11-20 07:22:32.017893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.700 [2024-11-20 07:22:32.017906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.700 [2024-11-20 07:22:32.017922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.700 [2024-11-20 07:22:32.017936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.700 [2024-11-20 07:22:32.017951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.700 [2024-11-20 07:22:32.017965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.700 [2024-11-20 07:22:32.017981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.700 [2024-11-20 07:22:32.017994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.700 [2024-11-20 07:22:32.018010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.700 [2024-11-20 07:22:32.018024] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.700 [2024-11-20 07:22:32.018040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.700 [2024-11-20 07:22:32.018053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.700 [2024-11-20 07:22:32.018069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.700 [2024-11-20 07:22:32.018083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.700 [2024-11-20 07:22:32.018098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.700 [2024-11-20 07:22:32.018112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.700 [2024-11-20 07:22:32.018128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.700 [2024-11-20 07:22:32.018141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.700 [2024-11-20 07:22:32.018157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.700 [2024-11-20 07:22:32.018174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.700 [2024-11-20 07:22:32.018191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.700 [2024-11-20 07:22:32.018205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.700 [2024-11-20 07:22:32.018220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.700 [2024-11-20 07:22:32.018234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.700 [2024-11-20 07:22:32.018249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.700 [2024-11-20 07:22:32.018263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.700 [2024-11-20 07:22:32.018279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.700 [2024-11-20 07:22:32.018293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.018315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.018331] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.018346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.018360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.018375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.018389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.018405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.018419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.018434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.018448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.018464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.018477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.018493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.018507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.018522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.018536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.018556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.018571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.018586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.018609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.018628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.018642] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.018658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.018672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.018688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.018702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.018717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.018732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.018747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.018761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.018777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.018790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.018807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.018820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.018836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.018849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.018865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.018880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.018895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.018916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.018932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.018950] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.018967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.018981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.018997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.019011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.019027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.019040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.019057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.019070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.019086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.019100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.019116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.019130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.019146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.019159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.019175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.019188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.019203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa378e0 is same with the state(6) to be set 00:20:28.701 [2024-11-20 07:22:32.020568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.020592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.020613] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.020628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.020643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.020657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.020673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.020695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.020712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.020725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.020741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.020754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.020770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.020784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.020800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.020813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.020829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.020842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.020858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.020871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.020886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.020900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.020915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.020929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.020945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.701 [2024-11-20 07:22:32.020958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.701 [2024-11-20 07:22:32.020973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.702 [2024-11-20 07:22:32.020987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.021003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.702 [2024-11-20 07:22:32.021016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.021032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.702 [2024-11-20 07:22:32.021045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.021070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.702 [2024-11-20 07:22:32.021085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.021101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.702 [2024-11-20 07:22:32.021114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.021130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.702 [2024-11-20 07:22:32.021144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.021160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.702 [2024-11-20 07:22:32.021174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.021190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.702 [2024-11-20 07:22:32.021204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.021219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.702 [2024-11-20 07:22:32.021233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.021249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.702 [2024-11-20 07:22:32.021262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.021278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.702 [2024-11-20 07:22:32.021291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.021316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.702 [2024-11-20 07:22:32.021332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.021348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.702 [2024-11-20 07:22:32.021362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.021378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.702 [2024-11-20 07:22:32.021391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.021407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.702 [2024-11-20 07:22:32.021420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.021436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.702 [2024-11-20 07:22:32.021454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.021471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.702 [2024-11-20 07:22:32.021485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.021501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.702 [2024-11-20 07:22:32.021514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.021529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:28.702 [2024-11-20 07:22:32.021543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.021559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.702 [2024-11-20 07:22:32.021572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.021587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.702 [2024-11-20 07:22:32.021601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.021622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.702 [2024-11-20 07:22:32.021635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.021651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.702 [2024-11-20 07:22:32.021664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.021680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.702 [2024-11-20 07:22:32.021693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.021709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.702 [2024-11-20 07:22:32.021722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.021738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.702 [2024-11-20 07:22:32.021752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.021767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.702 [2024-11-20 07:22:32.021781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.021797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.702 [2024-11-20 07:22:32.021810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.021833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:28.702 [2024-11-20 07:22:32.021847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.021864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.702 [2024-11-20 07:22:32.021879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.021895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.702 [2024-11-20 07:22:32.021908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.021924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.702 [2024-11-20 07:22:32.021938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.021953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.702 [2024-11-20 07:22:32.021967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.021982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.702 [2024-11-20 07:22:32.021996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.022012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.702 [2024-11-20 07:22:32.022025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.022041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.702 [2024-11-20 07:22:32.022054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.022069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.702 [2024-11-20 07:22:32.022083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.022099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.702 [2024-11-20 07:22:32.022112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.022128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.702 [2024-11-20 
07:22:32.022141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.022157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.702 [2024-11-20 07:22:32.022170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.022186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.702 [2024-11-20 07:22:32.022203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.022220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.702 [2024-11-20 07:22:32.022234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.702 [2024-11-20 07:22:32.022250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.702 [2024-11-20 07:22:32.022263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.022279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.022292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.022314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.022331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.022347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.022362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.022378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.022397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.022414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.022428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.022444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.022458] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.022473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.022487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.022503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.022518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.022532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa3b8b0 is same with the state(6) to be set 00:20:28.703 [2024-11-20 07:22:32.023815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.023838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.023860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.023880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.023898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.023913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.023929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.023944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.023960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.023974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.023989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.024004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.024020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.024034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.024050] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.024064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.024080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.024094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.024109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.024123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.024138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.024152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.024168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.024182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.024197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.024211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.024226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.024240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.024256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.024274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.024290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.024315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.024333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.024352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.024368] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.024382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.024397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.024411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.024426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.024440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.024456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.024470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.024486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.024500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.024515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.024529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.024545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.024558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.024574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.024588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.024604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.024625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.024640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.024654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.024674] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.024689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.024704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.024718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.024733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.024747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.024762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.024776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.024792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.024805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.024820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.024834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.703 [2024-11-20 07:22:32.024850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.703 [2024-11-20 07:22:32.024863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.704 [2024-11-20 07:22:32.024879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.704 [2024-11-20 07:22:32.024892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.704 [2024-11-20 07:22:32.024908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.704 [2024-11-20 07:22:32.024921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.704 [2024-11-20 07:22:32.024937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.704 [2024-11-20 07:22:32.024950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.704 [2024-11-20 07:22:32.024966] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.704 [2024-11-20 07:22:32.024979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.704 [2024-11-20 07:22:32.024994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.704 [2024-11-20 07:22:32.025008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.704 [2024-11-20 07:22:32.025024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.704 [2024-11-20 07:22:32.025041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.704 [2024-11-20 07:22:32.025057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.704 [2024-11-20 07:22:32.025071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.704 [2024-11-20 07:22:32.025087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.704 [2024-11-20 07:22:32.025101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.704 [2024-11-20 07:22:32.025116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.704 [2024-11-20 07:22:32.025129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.704 [2024-11-20 07:22:32.025145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.704 [2024-11-20 07:22:32.025159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.704 [2024-11-20 07:22:32.025174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.704 [2024-11-20 07:22:32.025187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.704 [2024-11-20 07:22:32.025203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.704 [2024-11-20 07:22:32.025216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.704 [2024-11-20 07:22:32.025233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.704 [2024-11-20 07:22:32.025246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.704 [2024-11-20 07:22:32.025262] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.704 [2024-11-20 07:22:32.025278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.704 [2024-11-20 07:22:32.025295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.704 [2024-11-20 07:22:32.025317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.704 [2024-11-20 07:22:32.025335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.704 [2024-11-20 07:22:32.025353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.704 [2024-11-20 07:22:32.025369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.704 [2024-11-20 07:22:32.025383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.704 [2024-11-20 07:22:32.025399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.704 [2024-11-20 07:22:32.025413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.704 [2024-11-20 07:22:32.025434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.704 [2024-11-20 07:22:32.025448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.704 [2024-11-20 07:22:32.025464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.704 [2024-11-20 07:22:32.025479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.704 [2024-11-20 07:22:32.025495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.704 [2024-11-20 07:22:32.025509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.704 [2024-11-20 07:22:32.025525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.704 [2024-11-20 07:22:32.025538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.704 [2024-11-20 07:22:32.025555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.704 [2024-11-20 07:22:32.025569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.704 [2024-11-20 07:22:32.025585] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.704 [2024-11-20 07:22:32.025599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.704 [2024-11-20 07:22:32.025615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.704 [2024-11-20 07:22:32.025629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.704 [2024-11-20 07:22:32.025645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.704 [2024-11-20 07:22:32.025659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.704 [2024-11-20 07:22:32.025675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.704 [2024-11-20 07:22:32.025689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.704 [2024-11-20 07:22:32.025705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.704 [2024-11-20 07:22:32.025719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.704 [2024-11-20 07:22:32.025734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.704 [2024-11-20 07:22:32.025748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.704 [2024-11-20 07:22:32.025764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.704 [2024-11-20 07:22:32.025778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.704 [2024-11-20 07:22:32.025792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa3e3a0 is same with the state(6) to be set 00:20:28.704 [2024-11-20 07:22:32.027069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.704 [2024-11-20 07:22:32.027092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.704 [2024-11-20 07:22:32.027113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.704 [2024-11-20 07:22:32.027129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.704 [2024-11-20 07:22:32.027146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.704 [2024-11-20 07:22:32.027160] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.704 [2024-11-20 07:22:32.027176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.704 [2024-11-20 07:22:32.027190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.704 [2024-11-20 07:22:32.027207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.704 [2024-11-20 07:22:32.027221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.704 [2024-11-20 07:22:32.027237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.704 [2024-11-20 07:22:32.027252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.704 [2024-11-20 07:22:32.027268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.704 [2024-11-20 07:22:32.027282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.704 [2024-11-20 07:22:32.027298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.704 [2024-11-20 07:22:32.027321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.027338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.027352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.027369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.027384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.027400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.027414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.027430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.027444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.027460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.027479] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.027496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.027510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.027526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.027540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.027556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.027569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.027585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.027599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.027615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.027628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.027652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.027666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.027682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.027695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.027711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.027725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.027741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.027755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.027771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.027784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.027800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.027814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.027829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.027843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.027863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.027878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.027893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.027907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.027923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.027936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.027952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.027965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.027981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.027994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.028009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.028023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.028038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.028052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.028067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.028081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.028097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.028110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.028126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.028139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.028155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.028169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.028184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.028198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.028214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.028231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.028248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.028263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.028278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.028291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.028312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.028328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.028344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.028358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.028374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.028387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.028403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.028417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.028433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.028446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.028462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.028475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.028491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.028505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.028520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.028534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.028550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.028564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.028580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.028593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.705 [2024-11-20 07:22:32.028614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.705 [2024-11-20 07:22:32.028628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.706 [2024-11-20 07:22:32.028644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.706 [2024-11-20 07:22:32.028658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.706 [2024-11-20 07:22:32.028674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.706 [2024-11-20 07:22:32.028687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:28.706 [2024-11-20 07:22:32.028702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.706 [2024-11-20 07:22:32.028715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.706 [2024-11-20 07:22:32.028731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.706 [2024-11-20 07:22:32.028745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.706 [2024-11-20 07:22:32.028760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.706 [2024-11-20 07:22:32.028774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.706 [2024-11-20 07:22:32.028789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.706 [2024-11-20 07:22:32.028803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.706 [2024-11-20 07:22:32.028819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.706 [2024-11-20 07:22:32.028832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.706 [2024-11-20 07:22:32.028848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.706 [2024-11-20 07:22:32.028861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.706 [2024-11-20 07:22:32.028877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.706 [2024-11-20 07:22:32.028891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.706 [2024-11-20 07:22:32.028906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.706 [2024-11-20 07:22:32.028920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.706 [2024-11-20 07:22:32.028935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.706 [2024-11-20 07:22:32.028949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.706 [2024-11-20 07:22:32.028964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.706 [2024-11-20 07:22:32.028981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.706 [2024-11-20 
07:22:32.028998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:28.706 [2024-11-20 07:22:32.029011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.706 [2024-11-20 07:22:32.029026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x877f10 is same with the state(6) to be set 00:20:28.706 [2024-11-20 07:22:32.031521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:28.706 [2024-11-20 07:22:32.031557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:20:28.706 [2024-11-20 07:22:32.031576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:20:28.706 [2024-11-20 07:22:32.031594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:20:28.706 [2024-11-20 07:22:32.031721] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:20:28.706 [2024-11-20 07:22:32.031748] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:20:28.706 [2024-11-20 07:22:32.031771] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:20:28.964 [2024-11-20 07:22:32.047650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:20:28.964 [2024-11-20 07:22:32.047734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:20:28.964 task offset: 27136 on job bdev=Nvme5n1 fails 00:20:28.964 00:20:28.964 Latency(us) 00:20:28.964 [2024-11-20T06:22:32.397Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:28.964 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:28.964 Job: Nvme1n1 ended in about 1.00 seconds with error 00:20:28.964 Verification LBA range: start 0x0 length 0x400 00:20:28.964 Nvme1n1 : 1.00 192.67 12.04 64.22 0.00 246635.14 18835.53 262532.36 00:20:28.964 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:28.964 Job: Nvme2n1 ended in about 1.00 seconds with error 00:20:28.964 Verification LBA range: start 0x0 length 0x400 00:20:28.964 Nvme2n1 : 1.00 197.05 12.32 64.02 0.00 238343.47 29127.11 233016.89 00:20:28.964 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:28.964 Job: Nvme3n1 ended in about 1.00 seconds with error 00:20:28.964 Verification LBA range: start 0x0 length 0x400 00:20:28.964 Nvme3n1 : 1.00 195.42 12.21 63.81 0.00 235714.88 18155.90 237677.23 00:20:28.964 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:28.964 Job: Nvme4n1 ended in about 1.01 seconds with error 00:20:28.964 Verification LBA range: start 0x0 length 0x400 00:20:28.964 Nvme4n1 : 1.01 194.79 12.17 63.60 0.00 232110.03 18641.35 256318.58 00:20:28.964 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:28.964 Job: Nvme5n1 ended in about 0.98 seconds with error 00:20:28.964 Verification LBA range: start 0x0 length 0x400 00:20:28.964 Nvme5n1 : 0.98 195.77 12.24 
65.26 0.00 224865.19 7573.05 256318.58 00:20:28.964 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:28.964 Job: Nvme6n1 ended in about 0.98 seconds with error 00:20:28.964 Verification LBA range: start 0x0 length 0x400 00:20:28.964 Nvme6n1 : 0.98 195.56 12.22 65.19 0.00 220662.42 6262.33 257872.02 00:20:28.964 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:28.964 Job: Nvme7n1 ended in about 1.01 seconds with error 00:20:28.964 Verification LBA range: start 0x0 length 0x400 00:20:28.964 Nvme7n1 : 1.01 126.79 7.92 63.40 0.00 297552.40 20583.16 276513.37 00:20:28.964 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:28.964 Job: Nvme8n1 ended in about 0.98 seconds with error 00:20:28.964 Verification LBA range: start 0x0 length 0x400 00:20:28.964 Nvme8n1 : 0.98 195.33 12.21 65.11 0.00 212146.44 12524.66 254765.13 00:20:28.964 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:28.964 Job: Nvme9n1 ended in about 1.01 seconds with error 00:20:28.964 Verification LBA range: start 0x0 length 0x400 00:20:28.964 Nvme9n1 : 1.01 126.39 7.90 63.19 0.00 287071.70 20388.98 264085.81 00:20:28.964 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:28.964 Job: Nvme10n1 ended in about 1.02 seconds with error 00:20:28.964 Verification LBA range: start 0x0 length 0x400 00:20:28.964 Nvme10n1 : 1.02 125.98 7.87 62.99 0.00 282387.60 21748.24 284280.60 00:20:28.964 [2024-11-20T06:22:32.397Z] =================================================================================================================== 00:20:28.964 [2024-11-20T06:22:32.397Z] Total : 1745.75 109.11 640.79 0.00 244355.95 6262.33 284280.60 00:20:28.964 [2024-11-20 07:22:32.075836] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:28.964 [2024-11-20 07:22:32.075926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:20:28.964 [2024-11-20 07:22:32.076240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:28.964 [2024-11-20 07:22:32.076277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6376f0 with addr=10.0.0.2, port=4420 00:20:28.964 [2024-11-20 07:22:32.076299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6376f0 is same with the state(6) to be set 00:20:28.964 [2024-11-20 07:22:32.076440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:28.964 [2024-11-20 07:22:32.076467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x62b6e0 with addr=10.0.0.2, port=4420 00:20:28.964 [2024-11-20 07:22:32.076483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62b6e0 is same with the state(6) to be set 00:20:28.964 [2024-11-20 07:22:32.076572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:28.964 [2024-11-20 07:22:32.076597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x633890 with addr=10.0.0.2, port=4420 00:20:28.964 [2024-11-20 07:22:32.076613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x633890 is same with the state(6) to be set 00:20:28.964 [2024-11-20 07:22:32.076707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:28.964 [2024-11-20 07:22:32.076735] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6366e0 with addr=10.0.0.2, port=4420 00:20:28.964 [2024-11-20 07:22:32.076751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6366e0 is same with the state(6) to be set 00:20:28.964 [2024-11-20 07:22:32.076797] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:20:28.964 [2024-11-20 07:22:32.076822] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:20:28.964 [2024-11-20 07:22:32.076842] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:20:28.964 [2024-11-20 07:22:32.076873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6366e0 (9): Bad file descriptor 00:20:28.964 [2024-11-20 07:22:32.076902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x633890 (9): Bad file descriptor 00:20:28.964 [2024-11-20 07:22:32.076925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x62b6e0 (9): Bad file descriptor 00:20:28.964 [2024-11-20 07:22:32.076961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6376f0 (9): Bad file descriptor 00:20:28.964 1745.75 IOPS, 109.11 MiB/s [2024-11-20T06:22:32.397Z] [2024-11-20 07:22:32.078978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:20:28.964 [2024-11-20 07:22:32.079005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:20:28.964 [2024-11-20 07:22:32.079149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:28.964 [2024-11-20 07:22:32.079177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa57f40 with addr=10.0.0.2, port=4420 00:20:28.964 [2024-11-20 07:22:32.079194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa57f40 is same with the state(6) to be set 00:20:28.964 [2024-11-20 07:22:32.079287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:28.964 [2024-11-20 07:22:32.079320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa6040 with addr=10.0.0.2, port=4420 00:20:28.964 [2024-11-20 07:22:32.079337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa6040 is same with the state(6) to be set 00:20:28.964 [2024-11-20 07:22:32.079426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:28.964 [2024-11-20 07:22:32.079451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaae4c0 with addr=10.0.0.2, port=4420 00:20:28.964 [2024-11-20 07:22:32.079467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4c0 is same with the state(6) to be set 00:20:28.964 [2024-11-20 07:22:32.079507] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:20:28.964 [2024-11-20 07:22:32.079530] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 
00:20:28.964 [2024-11-20 07:22:32.079548] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:20:28.964 [2024-11-20 07:22:32.079570] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:20:28.964 [2024-11-20 07:22:32.079589] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:20:28.964 [2024-11-20 07:22:32.079891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:20:28.964 [2024-11-20 07:22:32.080019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:28.964 [2024-11-20 07:22:32.080047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa75670 with addr=10.0.0.2, port=4420 00:20:28.964 [2024-11-20 07:22:32.080063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75670 is same with the state(6) to be set 00:20:28.964 [2024-11-20 07:22:32.080138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:28.964 [2024-11-20 07:22:32.080164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59f110 with addr=10.0.0.2, port=4420 00:20:28.964 [2024-11-20 07:22:32.080179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59f110 is same with the state(6) to be set 00:20:28.964 [2024-11-20 07:22:32.080196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa57f40 (9): Bad file descriptor 00:20:28.965 [2024-11-20 07:22:32.080215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa6040 (9): Bad file descriptor 00:20:28.965 [2024-11-20 07:22:32.080233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaae4c0 (9): Bad file descriptor 00:20:28.965 [2024-11-20 07:22:32.080249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:28.965 [2024-11-20 07:22:32.080262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:28.965 [2024-11-20 07:22:32.080284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:28.965 [2024-11-20 07:22:32.080300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:20:28.965 [2024-11-20 07:22:32.080325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:20:28.965 [2024-11-20 07:22:32.080337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:20:28.965 [2024-11-20 07:22:32.080350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:20:28.965 [2024-11-20 07:22:32.080362] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:20:28.965 [2024-11-20 07:22:32.080375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:20:28.965 [2024-11-20 07:22:32.080386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:20:28.965 [2024-11-20 07:22:32.080398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:20:28.965 [2024-11-20 07:22:32.080410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:20:28.965 [2024-11-20 07:22:32.080423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:20:28.965 [2024-11-20 07:22:32.080434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:20:28.965 [2024-11-20 07:22:32.080446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:20:28.965 [2024-11-20 07:22:32.080458] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:20:28.965 [2024-11-20 07:22:32.080643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:28.965 [2024-11-20 07:22:32.080669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa62c90 with addr=10.0.0.2, port=4420 00:20:28.965 [2024-11-20 07:22:32.080685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa62c90 is same with the state(6) to be set 00:20:28.965 [2024-11-20 07:22:32.080702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa75670 (9): Bad file descriptor 00:20:28.965 [2024-11-20 07:22:32.080721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x59f110 (9): Bad file descriptor 00:20:28.965 [2024-11-20 07:22:32.080736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:20:28.965 [2024-11-20 07:22:32.080749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:20:28.965 [2024-11-20 07:22:32.080761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:20:28.965 [2024-11-20 07:22:32.080774] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:20:28.965 [2024-11-20 07:22:32.080788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:20:28.965 [2024-11-20 07:22:32.080799] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:20:28.965 [2024-11-20 07:22:32.080811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:20:28.965 [2024-11-20 07:22:32.080823] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:20:28.965 [2024-11-20 07:22:32.080836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:20:28.965 [2024-11-20 07:22:32.080848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:20:28.965 [2024-11-20 07:22:32.080865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:20:28.965 [2024-11-20 07:22:32.080878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:20:28.965 [2024-11-20 07:22:32.080916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa62c90 (9): Bad file descriptor 00:20:28.965 [2024-11-20 07:22:32.080937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:20:28.965 [2024-11-20 07:22:32.080950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:20:28.965 [2024-11-20 07:22:32.080963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:20:28.965 [2024-11-20 07:22:32.080975] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:20:28.965 [2024-11-20 07:22:32.080988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:20:28.965 [2024-11-20 07:22:32.081000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:20:28.965 [2024-11-20 07:22:32.081012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:20:28.965 [2024-11-20 07:22:32.081023] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:20:28.965 [2024-11-20 07:22:32.081351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:20:28.965 [2024-11-20 07:22:32.081373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:20:28.965 [2024-11-20 07:22:32.081387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:20:28.965 [2024-11-20 07:22:32.081399] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:20:29.223 07:22:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:20:30.158 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2548731 00:20:30.158 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:20:30.158 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2548731 00:20:30.158 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:20:30.158 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:30.158 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:20:30.158 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:30.158 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 2548731 00:20:30.158 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:20:30.158 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:30.158 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:20:30.158 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:20:30.158 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:20:30.158 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:30.158 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:20:30.158 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:30.158 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:30.158 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:30.158 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:30.159 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:30.159 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:20:30.159 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:30.159 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:20:30.159 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:30.159 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:30.159 rmmod nvme_tcp 00:20:30.159 
rmmod nvme_fabrics 00:20:30.159 rmmod nvme_keyring 00:20:30.159 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:30.159 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:20:30.159 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:20:30.159 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2548554 ']' 00:20:30.159 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2548554 00:20:30.159 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 2548554 ']' 00:20:30.159 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 2548554 00:20:30.159 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2548554) - No such process 00:20:30.159 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@979 -- # echo 'Process with pid 2548554 is not found' 00:20:30.159 Process with pid 2548554 is not found 00:20:30.159 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:30.159 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:30.159 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:30.159 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:20:30.159 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:20:30.159 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:30.159 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:20:30.418 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:30.418 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:30.418 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:30.418 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:30.418 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:32.323 00:20:32.323 real 0m7.548s 00:20:32.323 user 0m18.893s 00:20:32.323 sys 0m1.463s 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:32.323 ************************************ 00:20:32.323 END TEST nvmf_shutdown_tc3 00:20:32.323 ************************************ 00:20:32.323 07:22:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:32.323 ************************************ 00:20:32.323 START TEST nvmf_shutdown_tc4 00:20:32.323 ************************************ 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc4 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:32.323 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:32.323 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:32.324 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:32.324 07:22:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:32.324 Found net devices under 0000:09:00.0: cvl_0_0 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:32.324 Found net devices under 0000:09:00.1: cvl_0_1 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:32.324 07:22:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:32.324 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:32.582 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:32.582 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:32.582 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:32.582 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:32.582 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:32.582 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:32.582 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:32.582 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:32.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:32.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:20:32.582 00:20:32.582 --- 10.0.0.2 ping statistics --- 00:20:32.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:32.582 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:20:32.582 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:32.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:32.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:20:32.582 00:20:32.582 --- 10.0.0.1 ping statistics --- 00:20:32.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:32.582 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:20:32.582 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:32.583 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:20:32.583 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:32.583 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:32.583 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:32.583 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:32.583 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:32.583 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:32.583 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:32.583 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:32.583 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:32.583 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:32.583 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:32.583 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2549645 00:20:32.583 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:32.583 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2549645 00:20:32.583 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@833 -- # '[' -z 2549645 ']' 00:20:32.583 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:32.583 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:32.583 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:32.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:32.583 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:32.583 07:22:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:32.583 [2024-11-20 07:22:35.934993] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:20:32.583 [2024-11-20 07:22:35.935078] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:32.841 [2024-11-20 07:22:36.015276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:32.841 [2024-11-20 07:22:36.076495] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:32.841 [2024-11-20 07:22:36.076554] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:32.841 [2024-11-20 07:22:36.076567] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:32.841 [2024-11-20 07:22:36.076578] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:32.841 [2024-11-20 07:22:36.076588] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:32.841 [2024-11-20 07:22:36.078144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:32.841 [2024-11-20 07:22:36.078201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:32.841 [2024-11-20 07:22:36.078267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:32.841 [2024-11-20 07:22:36.078270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:32.841 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:32.841 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@866 -- # return 0 00:20:32.841 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:32.841 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:32.841 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:32.841 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:32.841 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:32.841 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.841 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:32.841 [2024-11-20 07:22:36.240394] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:32.841 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.841 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:32.841 07:22:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:32.841 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:32.841 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:32.841 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:32.841 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:32.841 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:32.841 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:32.841 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:32.841 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:32.841 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:32.841 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:32.841 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:32.841 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:32.841 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:32.841 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:32.841 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:32.841 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:32.841 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:33.100 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:33.100 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:33.100 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:33.100 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:33.100 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:33.100 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:33.100 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:33.100 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.100 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:33.100 Malloc1 
00:20:33.100 [2024-11-20 07:22:36.335659] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:33.100 Malloc2 00:20:33.100 Malloc3 00:20:33.100 Malloc4 00:20:33.100 Malloc5 00:20:33.389 Malloc6 00:20:33.389 Malloc7 00:20:33.389 Malloc8 00:20:33.389 Malloc9 00:20:33.389 Malloc10 00:20:33.389 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.389 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:33.389 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:33.389 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:33.389 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2549821 00:20:33.389 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:20:33.389 07:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:20:33.672 [2024-11-20 07:22:36.845396] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:38.950 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:38.950 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2549645 00:20:38.950 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 2549645 ']' 00:20:38.950 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 2549645 00:20:38.950 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # uname 00:20:38.950 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:38.950 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2549645 00:20:38.950 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:38.950 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:38.950 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2549645' 00:20:38.950 killing process with pid 2549645 00:20:38.950 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@971 -- # kill 2549645 00:20:38.950 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@976 -- # wait 2549645 00:20:38.950 [2024-11-20 07:22:41.837327] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcf7e0 is same with the state(6) to be set
[... further identical nvmf_tcp_qpair_set_recv_state errors for tqpair=0xfcf7e0 omitted ...]
[2024-11-20 07:22:41.838599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfce970 is same with the state(6) to be set
[... further identical nvmf_tcp_qpair_set_recv_state errors for tqpair=0xfce970 omitted ...]
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" completions omitted ...]
[2024-11-20 07:22:41.841018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error completions omitted ...]
[2024-11-20 07:22:41.842034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error completions omitted ...]
[2024-11-20 07:22:41.843291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error completions omitted ...]
[2024-11-20 07:22:41.844873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:38.952 NVMe io qpair process completion error
[2024-11-20 07:22:41.847030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082520 is same with the state(6) to be set
[... further identical nvmf_tcp_qpair_set_recv_state errors for tqpair=0x1082520 omitted ...]
[... repeated write-error completions omitted ...]
00:20:38.952 Write
completed with error (sct=0, sc=8) 00:20:38.952 starting I/O failed: -6 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 starting I/O failed: -6 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 starting I/O failed: -6 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 starting I/O failed: -6 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 [2024-11-20 07:22:41.849948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 starting I/O failed: -6 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 starting I/O failed: -6 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 starting I/O failed: -6 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 starting I/O failed: -6 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 starting I/O failed: -6 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 starting I/O failed: -6 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 starting I/O failed: -6 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 starting I/O failed: -6 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 starting I/O failed: -6 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 starting I/O failed: -6 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 starting I/O failed: -6 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 starting I/O failed: -6 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 starting I/O failed: -6 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 starting I/O failed: -6 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 starting I/O failed: -6 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 starting I/O failed: -6 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 Write completed with 
error (sct=0, sc=8) 00:20:38.952 starting I/O failed: -6 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 starting I/O failed: -6 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 starting I/O failed: -6 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 starting I/O failed: -6 00:20:38.952 [2024-11-20 07:22:41.850957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 starting I/O failed: -6 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 starting I/O failed: -6 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 starting I/O failed: -6 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 starting I/O failed: -6 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 starting I/O failed: -6 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 starting I/O failed: -6 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 starting I/O failed: -6 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 starting I/O failed: -6 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 starting I/O failed: -6 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 starting I/O failed: -6 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 starting I/O failed: -6 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 starting I/O failed: -6 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 starting I/O failed: -6 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 starting I/O failed: -6 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 starting I/O failed: -6 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 starting I/O failed: -6 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 Write completed with error (sct=0, sc=8) 00:20:38.952 starting I/O failed: -6 00:20:38.952 [2024-11-20 07:22:41.851546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106b7a0 is same with Write completed with error (sct=0, sc=8) 00:20:38.952 the state(6) to be set 00:20:38.952 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 [2024-11-20 07:22:41.851604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106b7a0 is same with the state(6) to be set 00:20:38.953 starting I/O failed: -6 00:20:38.953 [2024-11-20 07:22:41.851621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106b7a0 is same with the state(6) to be set 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 [2024-11-20 07:22:41.851634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106b7a0 is same with the state(6) to be set 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 [2024-11-20 07:22:41.851646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x106b7a0 is same with the state(6) to be set 00:20:38.953 starting I/O failed: -6 00:20:38.953 [2024-11-20 07:22:41.851663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106b7a0 is same with the state(6) to be set 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 [2024-11-20 07:22:41.851675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106b7a0 is same with the state(6) to be set 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 [2024-11-20 07:22:41.852015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106bc70 is same with Write completed with error (sct=0, sc=8) 00:20:38.953 the state(6) to be set 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 [2024-11-20 07:22:41.852048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106bc70 is same with the state(6) to be set 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 [2024-11-20 07:22:41.852064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106bc70 is same with the state(6) to be set 00:20:38.953 starting I/O failed: -6 00:20:38.953 [2024-11-20 07:22:41.852076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106bc70 is same with the state(6) to be set 00:20:38.953 [2024-11-20 07:22:41.852089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106bc70 is same with the state(6) to be set 00:20:38.953 [2024-11-20 07:22:41.852101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106bc70 is same with the state(6) to be set 00:20:38.953 [2024-11-20 07:22:41.852100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:38.953 [2024-11-20 07:22:41.852115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106bc70 is same with the state(6) to be set 00:20:38.953 [2024-11-20 07:22:41.852127] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106bc70 is same with the state(6) to be set 00:20:38.953 [2024-11-20 07:22:41.852139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106bc70 is same with the state(6) to be set 00:20:38.953 [2024-11-20 07:22:41.852152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106bc70 is same with starting I/O failed: -6 00:20:38.953 the state(6) to be set 00:20:38.953 [2024-11-20 07:22:41.852179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106bc70 is same with the state(6) to be set 00:20:38.953 [2024-11-20 07:22:41.852193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106bc70 is same with the state(6) to be set 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 [2024-11-20 07:22:41.852729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106c160 is same with the state(6) to be set 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 [2024-11-20 07:22:41.852773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106c160 is same with the state(6) to be set 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 [2024-11-20 07:22:41.852793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106c160 is same with the state(6) to be set 
00:20:38.953 starting I/O failed: -6 00:20:38.953 [2024-11-20 07:22:41.852807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106c160 is same with the state(6) to be set 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 [2024-11-20 07:22:41.852820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106c160 is same with the state(6) to be set 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.953 Write completed with error (sct=0, sc=8) 00:20:38.953 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 
00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 [2024-11-20 07:22:41.853804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:38.954 NVMe io qpair process completion error 00:20:38.954 [2024-11-20 07:22:41.854376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106e880 is same with the state(6) to be set 00:20:38.954 [2024-11-20 07:22:41.854737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106d9f0 is same with the state(6) to be set 00:20:38.954 [2024-11-20 07:22:41.854769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106d9f0 is same with the state(6) to be set 00:20:38.954 [2024-11-20 07:22:41.854784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106d9f0 is same with the state(6) to be set 00:20:38.954 [2024-11-20 07:22:41.854799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106d9f0 is same with the state(6) to be set 00:20:38.954 [2024-11-20 07:22:41.854811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106d9f0 is same with the state(6) to be set 00:20:38.954 [2024-11-20 07:22:41.854823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106d9f0 is same with the state(6) to be set 00:20:38.954 [2024-11-20 07:22:41.854836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106d9f0 is same with the state(6) to be set 00:20:38.954 [2024-11-20 07:22:41.854848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106d9f0 is same with the state(6) to be set 00:20:38.954 [2024-11-20 07:22:41.854861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106d9f0 is same with the state(6) to be set 00:20:38.954 [2024-11-20 07:22:41.854872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106d9f0 is same with the state(6) to be set 00:20:38.954 [2024-11-20 07:22:41.854884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106d9f0 is same with the state(6) to be set 00:20:38.954 [2024-11-20 07:22:41.854896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106d9f0 is same with the state(6) to be set 00:20:38.954 [2024-11-20 07:22:41.854908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106d9f0 is same with the state(6) to be set 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error 
(sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 [2024-11-20 07:22:41.855619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error 
(sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: 
-6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 Write completed with error (sct=0, sc=8) 00:20:38.954 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 [2024-11-20 07:22:41.857547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 
00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 [2024-11-20 07:22:41.859495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:38.955 NVMe io qpair process completion error 00:20:38.955 Write completed with error (sct=0, sc=8) 00:20:38.955 starting I/O failed: -6 00:20:38.955 Write 
00:20:38.955 Write completed with error (sct=0, sc=8)
00:20:38.955 starting I/O failed: -6
...
00:20:38.956 [2024-11-20 07:22:41.860802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:38.956 Write completed with error (sct=0, sc=8)
00:20:38.956 starting I/O failed: -6
...
00:20:38.956 [2024-11-20 07:22:41.861908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:38.956 Write completed with error (sct=0, sc=8)
00:20:38.956 starting I/O failed: -6
...
00:20:38.956 [2024-11-20 07:22:41.863015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:38.956 Write completed with error (sct=0, sc=8)
00:20:38.957 starting I/O failed: -6
...
00:20:38.957 [2024-11-20 07:22:41.864953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:38.957 NVMe io qpair process completion error
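The "CQ transport error -6 (No such device or address)" lines are emitted from nvme_qpair.c while completions are being polled after the TCP connection to the subsystem has gone away; -6 is -ENXIO. A minimal sketch of how an application-side poll loop could observe the same condition; poll_qpair is a hypothetical helper, not part of the test:

    #include <stdio.h>
    #include <string.h>
    #include "spdk/nvme.h"

    /* spdk_nvme_qpair_process_completions() returns the number of
     * completions reaped, or a negative errno once the qpair has hit a
     * transport error; -6 (-ENXIO, "No such device or address") matches the
     * driver messages above. */
    static int32_t
    poll_qpair(struct spdk_nvme_qpair *qpair)
    {
            int32_t rc;

            rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);
            if (rc < 0) {
                    fprintf(stderr, "qpair poll failed: %d (%s)\n",
                            rc, strerror(-rc));
                    /* The qpair is no longer usable; a real application
                     * would stop submitting and reconnect or fail over. */
            }
            return rc;
    }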
00:20:38.957 Write completed with error (sct=0, sc=8)
00:20:38.957 starting I/O failed: -6
...
00:20:38.957 [2024-11-20 07:22:41.866281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:38.957 Write completed with error (sct=0, sc=8)
00:20:38.957 starting I/O failed: -6
...
00:20:38.957 [2024-11-20 07:22:41.867320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:38.957 Write completed with error (sct=0, sc=8)
00:20:38.958 starting I/O failed: -6
...
00:20:38.958 [2024-11-20 07:22:41.868548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:38.958 Write completed with error (sct=0, sc=8)
00:20:38.958 starting I/O failed: -6
...
00:20:38.958 [2024-11-20 07:22:41.871071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:38.958 NVMe io qpair process completion error
00:20:38.958 Write completed with error (sct=0, sc=8)
00:20:38.959 starting I/O failed: -6
...
00:20:38.959 [2024-11-20 07:22:41.872447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:38.959 Write completed with error (sct=0, sc=8)
00:20:38.959 starting I/O failed: -6
...
00:20:38.959 [2024-11-20 07:22:41.873548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:38.959 Write completed with error (sct=0, sc=8)
00:20:38.959 starting I/O failed: -6
...
00:20:38.959 [2024-11-20 07:22:41.874658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:38.959 Write completed with error (sct=0, sc=8)
00:20:38.960 starting I/O failed: -6
...
00:20:38.960 [2024-11-20 07:22:41.878087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:38.960 NVMe io qpair process completion error
00:20:38.960 Write completed with error (sct=0, sc=8)
00:20:38.960 starting I/O failed: -6
...
00:20:38.960 [2024-11-20 07:22:41.879420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:38.960 Write completed with error (sct=0, sc=8)
00:20:38.960 starting I/O failed: -6
...
00:20:38.960 [2024-11-20 07:22:41.880547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:38.961 Write completed with error (sct=0, sc=8)
00:20:38.961 starting I/O failed: -6
...
00:20:38.961 [2024-11-20 07:22:41.881665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:38.961 Write completed with error (sct=0, sc=8)
00:20:38.961 starting I/O failed: -6
...
00:20:38.961 [2024-11-20 07:22:41.883659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:38.961 NVMe io qpair process completion error
sc=8) 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 
Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write 
completed with error (sct=0, sc=8) 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.962 Write completed with error (sct=0, sc=8) 00:20:38.962 starting I/O failed: -6 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 starting I/O failed: -6 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 starting I/O failed: -6 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 starting I/O failed: -6 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 starting I/O failed: -6 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 starting I/O failed: -6 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 starting I/O failed: -6 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 starting I/O failed: -6 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 starting I/O failed: -6 00:20:38.963 Write completed with error 
(sct=0, sc=8) 00:20:38.963 starting I/O failed: -6 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 starting I/O failed: -6 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 starting I/O failed: -6 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 starting I/O failed: -6 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 starting I/O failed: -6 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 starting I/O failed: -6 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 starting I/O failed: -6 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 starting I/O failed: -6 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 starting I/O failed: -6 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 starting I/O failed: -6 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 starting I/O failed: -6 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 starting I/O failed: -6 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 starting I/O failed: -6 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 starting I/O failed: -6 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 starting I/O failed: -6 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 starting I/O failed: -6 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 starting I/O failed: -6 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 starting I/O failed: -6 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 starting I/O failed: -6 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 starting I/O failed: -6 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 starting I/O failed: -6 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 starting I/O failed: -6 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 starting I/O failed: -6 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 starting I/O failed: -6 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 starting I/O failed: -6 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 starting I/O failed: -6 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 starting I/O failed: -6 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 starting I/O failed: -6 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 starting I/O failed: -6 00:20:38.963 [2024-11-20 07:22:41.888611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:38.963 NVMe io qpair process completion error 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 starting I/O failed: -6 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 starting I/O failed: -6 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 starting I/O failed: -6 00:20:38.963 Write completed with error (sct=0, sc=8) 00:20:38.963 Write completed with error (sct=0, 
00:20:38.963 Write completed with error (sct=0, sc=8)
00:20:38.963 starting I/O failed: -6
[... the two messages above repeat for the remaining queued I/Os ...]
00:20:38.963 [2024-11-20 07:22:41.889971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:38.963 Write completed with error (sct=0, sc=8)
00:20:38.963 starting I/O failed: -6
[... the two messages above repeat for the remaining queued I/Os ...]
00:20:38.963 [2024-11-20 07:22:41.890993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:38.963 Write completed with error (sct=0, sc=8)
00:20:38.963 starting I/O failed: -6
[... the two messages above repeat for the remaining queued I/Os ...]
00:20:38.964 [2024-11-20 07:22:41.892074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:38.964 Write completed with error (sct=0, sc=8)
00:20:38.964 starting I/O failed: -6
[... the two messages above repeat for the remaining queued I/Os ...]
00:20:38.964 [2024-11-20 07:22:41.894192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:38.964 NVMe io qpair process completion error
00:20:38.964 Write completed with error (sct=0, sc=8)
00:20:38.965 starting I/O failed: -6
[... the two messages above repeat for the remaining queued I/Os ...]
00:20:38.965 [2024-11-20 07:22:41.895524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:38.965 Write completed with error (sct=0, sc=8)
00:20:38.965 starting I/O failed: -6
[... the two messages above repeat for the remaining queued I/Os ...]
00:20:38.965 [2024-11-20 07:22:41.896594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:38.965 starting I/O failed: -6
00:20:38.965 Write completed with error (sct=0, sc=8)
[... the two messages above repeat for the remaining queued I/Os ...]
00:20:38.965 Write completed with error (sct=0, sc=8)
00:20:38.965 starting I/O failed: -6
[... the two messages above repeat for the remaining queued I/Os ...]
00:20:38.965 [2024-11-20 07:22:41.898030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:38.965 Write completed with error (sct=0, sc=8)
00:20:38.965 starting I/O failed: -6
[... the two messages above repeat for the remaining queued I/Os ...]
00:20:38.966 [2024-11-20 07:22:41.901803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:38.966 NVMe io qpair process completion error
00:20:38.966 Initializing NVMe Controllers
00:20:38.966 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:20:38.966 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:38.966 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:20:38.966 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:20:38.966 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:20:38.966 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:20:38.966 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:20:38.966 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:20:38.966 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:20:38.966 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:20:38.966 Controller IO queue size 128, less than required.
00:20:38.966 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
[... the two messages above are printed once per attached controller ...]
00:20:38.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:20:38.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:38.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:20:38.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:20:38.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:20:38.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:20:38.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:20:38.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:20:38.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:20:38.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:20:38.966 Initialization complete. Launching workers.
00:20:38.966 ========================================================
00:20:38.966                                                                                Latency(us)
00:20:38.966 Device Information                                                        :      IOPS     MiB/s    Average        min        max
00:20:38.966 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:   1827.41     78.52   70062.94    1077.41  134913.76
00:20:38.966 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   1819.20     78.17   69600.50     860.73  122820.85
00:20:38.966 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:   1849.94     79.49   69181.97     781.25  120894.81
00:20:38.966 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:  1827.62     78.53   69294.91    1013.88  120836.61
00:20:38.966 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:   1835.63     78.87   69017.08     766.28  117023.77
00:20:38.966 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:   1817.94     78.11   69710.60    1122.30  115770.96
00:20:38.966 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:   1746.77     75.06   72574.93    1019.52  122564.54
00:20:38.966 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:   1826.99     78.50   69425.32     781.45  125684.11
00:20:38.966 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:   1833.73     78.79   69212.78     798.92  116803.51
00:20:38.966 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:   1817.73     78.11   69834.49     856.09  132914.35
00:20:38.966 ========================================================
00:20:38.966 Total                                                                     :  18202.97    782.16   69778.03     766.28  134913.76
00:20:38.966
00:20:38.966 [2024-11-20 07:22:41.907280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b02c0 is same with the state(6) to be set
00:20:38.966 [2024-11-20 07:22:41.907396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b1720 is same with the state(6) to be set
00:20:38.966 [2024-11-20 07:22:41.907455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b0920 is same with the state(6) to be set
00:20:38.966 [2024-11-20 07:22:41.907512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b1900 is same with the state(6) to be set
00:20:38.966 [2024-11-20 07:22:41.907571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b0c50 is same with the state(6) to be set
00:20:38.966 [2024-11-20 07:22:41.907629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23af9e0 is same with the state(6) to be set
00:20:38.966 [2024-11-20 07:22:41.907687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23af6b0 is same with the state(6) to be set
00:20:38.966 [2024-11-20 07:22:41.907744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23afd10 is same with the state(6) to be set
00:20:38.967 [2024-11-20 07:22:41.907801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b05f0 is same with the state(6) to be set
00:20:38.967 [2024-11-20 07:22:41.907858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b1ae0 is same with the state(6) to be set
00:20:38.967 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:20:38.967 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
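Each attached controller above advertises an I/O queue size of 128, and the initiator warns that requests beyond that are queued inside the NVMe driver. When that queueing is unwanted, the usual knob is the perf queue depth per qpair. A small, hedged example; the flag names are assumed from typical spdk_nvme_perf usage and should be checked against the tool's help output:

    #!/usr/bin/env bash
    # Keep the per-qpair queue depth at or below the controller's reported
    # IO queue size so submissions are not held back inside the driver.
    IO_QUEUE_SIZE=128                 # "Controller IO queue size 128" from the log above
    QD=$(( IO_QUEUE_SIZE / 2 ))       # e.g. 64, leaving headroom per qpair

    ./build/bin/spdk_nvme_perf -q "$QD" -o 4096 -w randwrite -t 10 \
        -r "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420"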
00:20:39.904 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2549821
00:20:39.904 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0
00:20:39.904 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2549821
00:20:39.904 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait
00:20:40.163 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:20:40.163 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait
00:20:40.163 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:20:40.163 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 2549821
00:20:40.163 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1
00:20:40.163 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:20:40.163 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:20:40.163 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:20:40.163 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:20:40.163 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:20:40.163 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:20:40.163 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:20:40.163 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:20:40.163 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:20:40.163 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:20:40.163 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:20:40.163 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:20:40.163 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:40.163 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:20:40.163 rmmod nvme_tcp
00:20:40.163 rmmod nvme_fabrics
00:20:40.163 rmmod nvme_keyring
00:20:40.163 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:20:40.163 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
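The trace above is the harness asserting that waiting on the killed perf process fails: valid_exec_arg confirms that wait is callable, the command is run, its exit status is captured in es, and the step passes only because es is non-zero. A minimal sketch of that negative-assertion pattern, not the actual autotest_common.sh implementation:

    #!/usr/bin/env bash
    # Minimal "NOT" helper in the spirit of the trace above; the real
    # autotest_common.sh version also validates the argument type and
    # treats exit codes above 128 (signal deaths) specially.
    NOT() {
        local es=0
        "$@" || es=$?        # run the command, remember its exit status
        (( es != 0 ))        # succeed only if the command failed
    }

    # Usage in this scenario: the shutdown test expects waiting on the
    # perf pid to return non-zero after the error storm.
    # NOT wait "$perf_pid"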
nvmf/common.sh@129 -- # return 0 00:20:40.163 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2549645 ']' 00:20:40.163 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2549645 00:20:40.163 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 2549645 ']' 00:20:40.163 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 2549645 00:20:40.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2549645) - No such process 00:20:40.163 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@979 -- # echo 'Process with pid 2549645 is not found' 00:20:40.163 Process with pid 2549645 is not found 00:20:40.163 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:40.163 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:40.163 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:40.163 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:20:40.163 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:20:40.163 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:40.163 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:20:40.163 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:40.163 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:40.163 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.163 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:40.163 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.064 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:42.064 00:20:42.064 real 0m9.774s 00:20:42.064 user 0m24.050s 00:20:42.064 sys 0m5.508s 00:20:42.064 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:42.064 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:42.064 ************************************ 00:20:42.064 END TEST nvmf_shutdown_tc4 00:20:42.064 ************************************ 00:20:42.064 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:20:42.064 00:20:42.064 real 0m37.243s 00:20:42.064 user 1m40.812s 00:20:42.064 sys 0m11.922s 00:20:42.064 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:42.064 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
common/autotest_common.sh@10 -- # set +x 00:20:42.064 ************************************ 00:20:42.064 END TEST nvmf_shutdown 00:20:42.064 ************************************ 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:42.323 ************************************ 00:20:42.323 START TEST nvmf_nsid 00:20:42.323 ************************************ 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:20:42.323 * Looking for test storage... 00:20:42.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:42.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.323 --rc genhtml_branch_coverage=1 00:20:42.323 --rc genhtml_function_coverage=1 00:20:42.323 --rc genhtml_legend=1 00:20:42.323 --rc geninfo_all_blocks=1 00:20:42.323 --rc geninfo_unexecuted_blocks=1 00:20:42.323 00:20:42.323 ' 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:42.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.323 --rc genhtml_branch_coverage=1 00:20:42.323 --rc genhtml_function_coverage=1 00:20:42.323 --rc genhtml_legend=1 00:20:42.323 --rc geninfo_all_blocks=1 00:20:42.323 --rc geninfo_unexecuted_blocks=1 00:20:42.323 00:20:42.323 ' 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:42.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.323 --rc genhtml_branch_coverage=1 00:20:42.323 --rc genhtml_function_coverage=1 00:20:42.323 --rc genhtml_legend=1 00:20:42.323 --rc geninfo_all_blocks=1 00:20:42.323 --rc geninfo_unexecuted_blocks=1 00:20:42.323 00:20:42.323 ' 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:42.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.323 --rc genhtml_branch_coverage=1 00:20:42.323 --rc genhtml_function_coverage=1 00:20:42.323 --rc genhtml_legend=1 00:20:42.323 --rc geninfo_all_blocks=1 00:20:42.323 --rc geninfo_unexecuted_blocks=1 00:20:42.323 00:20:42.323 ' 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:20:42.323 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.324 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:20:42.324 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:42.324 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:42.324 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:42.324 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:42.324 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.324 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:42.324 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:42.324 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:42.324 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:42.324 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:42.324 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:20:42.324 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:20:42.324 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:20:42.324 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:20:42.324 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:20:42.324 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:20:42.324 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:42.324 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:42.324 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:42.324 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:42.324 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:42.324 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.324 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:42.324 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.324 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:42.324 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:42.324 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:20:42.324 07:22:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:44.855 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:44.855 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:44.856 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:44.856 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:44.856 Found net devices under 0000:09:00.0: cvl_0_0 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:44.856 Found net devices under 0000:09:00.1: cvl_0_1 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:44.856 07:22:47 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:44.856 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:44.856 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.398 ms 00:20:44.856 00:20:44.856 --- 10.0.0.2 ping statistics --- 00:20:44.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.856 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:20:44.856 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:44.856 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:44.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:20:44.857 00:20:44.857 --- 10.0.0.1 ping statistics --- 00:20:44.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.857 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:20:44.857 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:44.857 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:20:44.857 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:44.857 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:44.857 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:44.857 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:44.857 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:44.857 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:44.857 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:44.857 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:20:44.857 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:44.857 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:44.857 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:44.857 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2552563 00:20:44.857 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:20:44.857 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2552563 00:20:44.857 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 2552563 ']' 00:20:44.857 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.857 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:44.857 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.857 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:44.857 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:44.857 [2024-11-20 07:22:48.069500] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:20:44.857 [2024-11-20 07:22:48.069563] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:44.857 [2024-11-20 07:22:48.135155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.857 [2024-11-20 07:22:48.188362] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:44.857 [2024-11-20 07:22:48.188416] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:44.857 [2024-11-20 07:22:48.188440] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:44.857 [2024-11-20 07:22:48.188451] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:44.857 [2024-11-20 07:22:48.188461] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:44.857 [2024-11-20 07:22:48.189039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.115 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:45.115 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:20:45.115 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:45.115 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:45.115 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:45.115 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:45.115 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:45.115 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2552589 00:20:45.115 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:20:45.115 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:20:45.115 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:20:45.115 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:20:45.115 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:45.115 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:45.115 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.115 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.115 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:45.115 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.115 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:45.115 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:45.115 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:20:45.115 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:20:45.115 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:20:45.115 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=74b6ba84-9b4a-4449-9477-f29e8805a155 00:20:45.115 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:20:45.115 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=4a5da523-4cf9-4ba8-b950-8e81d1d80e56 00:20:45.116 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:20:45.116 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=b8cbd3db-1a1b-44cf-a047-7cba7dc8c83b 00:20:45.116 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:20:45.116 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.116 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:45.116 null0 00:20:45.116 null1 00:20:45.116 null2 00:20:45.116 [2024-11-20 07:22:48.367035] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:45.116 [2024-11-20 07:22:48.377768] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:20:45.116 [2024-11-20 07:22:48.377826] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2552589 ] 00:20:45.116 [2024-11-20 07:22:48.391239] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:45.116 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.116 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2552589 /var/tmp/tgt2.sock 00:20:45.116 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 2552589 ']' 00:20:45.116 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:20:45.116 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:45.116 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:20:45.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:20:45.116 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:45.116 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:45.116 [2024-11-20 07:22:48.444720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.116 [2024-11-20 07:22:48.503048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:45.374 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:45.374 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:20:45.374 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:20:45.943 [2024-11-20 07:22:49.145448] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:45.943 [2024-11-20 07:22:49.161661] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:20:45.943 nvme0n1 nvme0n2 00:20:45.943 nvme1n1 00:20:45.943 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:20:45.943 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:20:45.943 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:46.511 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:20:46.512 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:20:46.512 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:20:46.512 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:20:46.512 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:20:46.512 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:20:46.512 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:20:46.512 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:20:46.512 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:20:46.512 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:20:46.512 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:20:46.512 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:20:46.512 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:20:47.445 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:20:47.446 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:20:47.446 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:20:47.446 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:20:47.446 07:22:50 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:20:47.446 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 74b6ba84-9b4a-4449-9477-f29e8805a155 00:20:47.446 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:47.446 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:20:47.446 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:20:47.446 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:20:47.446 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:47.446 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=74b6ba849b4a44499477f29e8805a155 00:20:47.446 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 74B6BA849B4A44499477F29E8805A155 00:20:47.446 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 74B6BA849B4A44499477F29E8805A155 == \7\4\B\6\B\A\8\4\9\B\4\A\4\4\4\9\9\4\7\7\F\2\9\E\8\8\0\5\A\1\5\5 ]] 00:20:47.446 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:20:47.446 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:20:47.446 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:20:47.446 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:20:47.446 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:20:47.446 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:20:47.446 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:20:47.446 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 4a5da523-4cf9-4ba8-b950-8e81d1d80e56 00:20:47.446 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:47.446 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:20:47.446 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:20:47.446 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:20:47.446 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:47.704 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=4a5da5234cf94ba8b9508e81d1d80e56 00:20:47.704 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 4A5DA5234CF94BA8B9508E81D1D80E56 00:20:47.704 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 4A5DA5234CF94BA8B9508E81D1D80E56 == \4\A\5\D\A\5\2\3\4\C\F\9\4\B\A\8\B\9\5\0\8\E\8\1\D\1\D\8\0\E\5\6 ]] 00:20:47.704 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:20:47.704 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:20:47.704 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:20:47.704 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:20:47.704 07:22:50 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:20:47.704 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:20:47.704 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:20:47.704 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid b8cbd3db-1a1b-44cf-a047-7cba7dc8c83b 00:20:47.704 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:47.704 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:20:47.704 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:20:47.704 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:20:47.704 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:47.704 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=b8cbd3db1a1b44cfa0477cba7dc8c83b 00:20:47.704 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo B8CBD3DB1A1B44CFA0477CBA7DC8C83B 00:20:47.704 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ B8CBD3DB1A1B44CFA0477CBA7DC8C83B == \B\8\C\B\D\3\D\B\1\A\1\B\4\4\C\F\A\0\4\7\7\C\B\A\7\D\C\8\C\8\3\B ]] 00:20:47.704 07:22:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:20:47.704 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:20:47.704 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:20:47.704 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2552589 00:20:47.704 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 2552589 ']' 00:20:47.704 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 2552589 00:20:47.704 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:20:47.704 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:47.704 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2552589 00:20:47.961 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:47.961 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:47.962 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2552589' 00:20:47.962 killing process with pid 2552589 00:20:47.962 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 2552589 00:20:47.962 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 2552589 00:20:48.219 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:20:48.219 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:48.219 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:20:48.219 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:48.219 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:20:48.219 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:48.219 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:48.219 rmmod nvme_tcp 00:20:48.219 rmmod nvme_fabrics 00:20:48.219 rmmod nvme_keyring 00:20:48.477 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:48.477 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:20:48.477 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:20:48.477 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2552563 ']' 00:20:48.477 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2552563 00:20:48.477 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 2552563 ']' 00:20:48.477 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 2552563 00:20:48.477 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:20:48.477 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:48.477 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2552563 00:20:48.477 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:48.477 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:48.477 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2552563' 00:20:48.477 killing process with pid 2552563 00:20:48.477 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 2552563 00:20:48.477 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 2552563 00:20:48.737 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:48.737 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:48.737 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:48.737 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:20:48.737 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:20:48.737 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:48.737 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:20:48.737 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:48.737 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:48.737 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:48.737 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:48.737 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.643 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:50.643 00:20:50.643 real 0m8.415s 00:20:50.643 user 0m8.210s 
00:20:50.643 sys 0m2.676s 00:20:50.643 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:50.643 07:22:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:50.643 ************************************ 00:20:50.643 END TEST nvmf_nsid 00:20:50.643 ************************************ 00:20:50.643 07:22:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:20:50.643 00:20:50.643 real 11m40.441s 00:20:50.643 user 27m40.886s 00:20:50.643 sys 2m47.762s 00:20:50.643 07:22:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:50.643 07:22:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:50.643 ************************************ 00:20:50.643 END TEST nvmf_target_extra 00:20:50.643 ************************************ 00:20:50.643 07:22:54 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:50.643 07:22:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:50.643 07:22:54 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:50.643 07:22:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:50.643 ************************************ 00:20:50.643 START TEST nvmf_host 00:20:50.643 ************************************ 00:20:50.643 07:22:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:50.902 * Looking for test storage... 00:20:50.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:50.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.902 --rc genhtml_branch_coverage=1 00:20:50.902 --rc genhtml_function_coverage=1 00:20:50.902 --rc genhtml_legend=1 00:20:50.902 --rc geninfo_all_blocks=1 00:20:50.902 --rc geninfo_unexecuted_blocks=1 00:20:50.902 00:20:50.902 ' 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:50.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.902 --rc genhtml_branch_coverage=1 00:20:50.902 --rc genhtml_function_coverage=1 00:20:50.902 --rc genhtml_legend=1 00:20:50.902 --rc geninfo_all_blocks=1 00:20:50.902 --rc geninfo_unexecuted_blocks=1 00:20:50.902 00:20:50.902 ' 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:50.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.902 --rc genhtml_branch_coverage=1 00:20:50.902 --rc genhtml_function_coverage=1 00:20:50.902 --rc genhtml_legend=1 00:20:50.902 --rc geninfo_all_blocks=1 00:20:50.902 --rc geninfo_unexecuted_blocks=1 00:20:50.902 00:20:50.902 ' 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:50.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.902 --rc genhtml_branch_coverage=1 00:20:50.902 --rc genhtml_function_coverage=1 00:20:50.902 --rc genhtml_legend=1 00:20:50.902 --rc geninfo_all_blocks=1 00:20:50.902 --rc geninfo_unexecuted_blocks=1 00:20:50.902 00:20:50.902 ' 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
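[editor's note] The scripts/common.sh trace above is a component-wise version comparison (here deciding that lcov 1.x is older than 2, so the legacy --rc option spelling is used). A minimal standalone sketch of the same comparison; ver_lt is a local name for illustration, not the helper's real name:

    ver_lt() {                                   # returns 0 when $1 is an older version than $2
        local IFS='.-:'
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1
    }
    ver_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov < 2: keep the legacy --rc lcov_* options"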
00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:50.902 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:20:50.902 07:22:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:20:50.903 07:22:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:50.903 07:22:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:50.903 07:22:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:50.903 07:22:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.903 ************************************ 00:20:50.903 START TEST nvmf_multicontroller 00:20:50.903 ************************************ 00:20:50.903 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:50.903 * Looking for test storage... 
00:20:50.903 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:50.903 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:50.903 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:20:50.903 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:50.903 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:50.903 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:51.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.162 --rc genhtml_branch_coverage=1 00:20:51.162 --rc genhtml_function_coverage=1 00:20:51.162 --rc genhtml_legend=1 00:20:51.162 --rc geninfo_all_blocks=1 00:20:51.162 --rc geninfo_unexecuted_blocks=1 00:20:51.162 00:20:51.162 ' 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:51.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.162 --rc genhtml_branch_coverage=1 00:20:51.162 --rc genhtml_function_coverage=1 00:20:51.162 --rc genhtml_legend=1 00:20:51.162 --rc geninfo_all_blocks=1 00:20:51.162 --rc geninfo_unexecuted_blocks=1 00:20:51.162 00:20:51.162 ' 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:51.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.162 --rc genhtml_branch_coverage=1 00:20:51.162 --rc genhtml_function_coverage=1 00:20:51.162 --rc genhtml_legend=1 00:20:51.162 --rc geninfo_all_blocks=1 00:20:51.162 --rc geninfo_unexecuted_blocks=1 00:20:51.162 00:20:51.162 ' 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:51.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.162 --rc genhtml_branch_coverage=1 00:20:51.162 --rc genhtml_function_coverage=1 00:20:51.162 --rc genhtml_legend=1 00:20:51.162 --rc geninfo_all_blocks=1 00:20:51.162 --rc geninfo_unexecuted_blocks=1 00:20:51.162 00:20:51.162 ' 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:20:51.162 07:22:54 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.162 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:20:51.163 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.163 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:20:51.163 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:51.163 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:51.163 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:51.163 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:51.163 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:51.163 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:51.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:51.163 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:51.163 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:51.163 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:51.163 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:51.163 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:51.163 07:22:54 
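[editor's note] The repeated "common.sh: line 33: [: : integer expression expected" message above comes from an empty variable reaching a numeric test: the traced command is '[' '' -eq 1 ']', and -eq requires an integer on both sides, so the branch simply falls through without aborting the run. A guarded sketch of how such a test avoids the warning (the variable name here is a placeholder, not the actual common.sh code):

    # '[ "" -eq 1 ]' prints "integer expression expected"; defaulting the value avoids it
    if [ "${SPDK_TEST_EXAMPLE_FLAG:-0}" -eq 1 ]; then   # SPDK_TEST_EXAMPLE_FLAG is a hypothetical name
        echo "flag enabled"
    fi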
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:51.163 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:51.163 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:51.163 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:51.163 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:51.163 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:51.163 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:51.163 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:51.163 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:51.163 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:51.163 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.163 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:51.163 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.163 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:51.163 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:51.163 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:20:51.163 07:22:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:53.067 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:53.067 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:20:53.067 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:53.067 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:53.067 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:53.067 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:53.067 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:53.067 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:20:53.067 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:53.067 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:20:53.067 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:20:53.067 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:20:53.067 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:20:53.067 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:20:53.067 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:20:53.067 
07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:53.067 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:53.067 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:53.067 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:53.067 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:53.067 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:53.067 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:53.067 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:53.067 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:53.067 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:53.067 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:53.067 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:53.067 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:53.068 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:53.068 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:53.068 07:22:56 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:53.068 Found net devices under 0000:09:00.0: cvl_0_0 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:53.068 Found net devices under 0000:09:00.1: cvl_0_1 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
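[editor's note] The block above is gather_supported_nvmf_pci_devs matching the Intel E810 ID (0x8086:0x159b, ice driver) and resolving each PCI function to its net interface through sysfs, which yields cvl_0_0 and cvl_0_1. A reduced sketch of that resolution, narrowed to the single device ID seen in this run:

    for pci in /sys/bus/pci/devices/*; do
        read -r vendor < "$pci/vendor"; read -r device < "$pci/device"
        [[ $vendor == 0x8086 && $device == 0x159b ]] || continue          # Intel E810 (ice)
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"   # e.g. cvl_0_0, cvl_0_1
        done
    done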
00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:53.068 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:53.326 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:53.326 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:53.326 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:53.326 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:53.326 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:53.326 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:53.326 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:53.326 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:53.326 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:53.326 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:53.326 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:53.326 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:53.326 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:53.327 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:53.327 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:53.327 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:53.327 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:53.327 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:53.327 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:53.327 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:53.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:53.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:20:53.327 00:20:53.327 --- 10.0.0.2 ping statistics --- 00:20:53.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.327 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:20:53.327 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:53.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:53.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:20:53.327 00:20:53.327 --- 10.0.0.1 ping statistics --- 00:20:53.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.327 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:20:53.327 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:53.327 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:20:53.327 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:53.327 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:53.327 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:53.327 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:53.327 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:53.327 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:53.327 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:53.327 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:53.327 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:53.327 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:53.327 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:53.327 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2555022 00:20:53.327 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:53.327 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2555022 00:20:53.327 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 2555022 ']' 00:20:53.327 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.327 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:53.327 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.327 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:53.327 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:53.327 [2024-11-20 07:22:56.713253] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
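[editor's note] The nvmf_tcp_init / nvmfappstart trace above moves the first NIC port into a network namespace as the target side (10.0.0.2), keeps the second port in the root namespace as the initiator (10.0.0.1), opens TCP port 4420, verifies reachability with ping in both directions, and then launches nvmf_tgt inside the namespace. A condensed sketch of the same steps (addresses, interface names, and target flags are taken from this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                              # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                    # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0      # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # both directions must answer
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!                                                             # waitforlisten then polls /var/tmp/spdk.sock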
00:20:53.327 [2024-11-20 07:22:56.713366] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.585 [2024-11-20 07:22:56.787948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:53.585 [2024-11-20 07:22:56.848531] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:53.585 [2024-11-20 07:22:56.848599] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:53.585 [2024-11-20 07:22:56.848613] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:53.585 [2024-11-20 07:22:56.848625] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:53.585 [2024-11-20 07:22:56.848634] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:53.585 [2024-11-20 07:22:56.850132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:53.585 [2024-11-20 07:22:56.850196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:53.585 [2024-11-20 07:22:56.850199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.585 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:53.585 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:20:53.585 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:53.585 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:53.585 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:53.585 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:53.585 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:53.586 07:22:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.586 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:53.586 [2024-11-20 07:22:57.005267] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:53.586 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.586 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:53.586 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.586 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:53.844 Malloc0 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:53.844 [2024-11-20 07:22:57.066156] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:53.844 [2024-11-20 07:22:57.074054] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:53.844 Malloc1 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2555122 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2555122 /var/tmp/bdevperf.sock 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 2555122 ']' 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:53.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
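[editor's note] By this point the multicontroller test has built its target configuration and started the bdevperf client. A condensed sketch of the same RPC sequence traced above (rpc_cmd is the test helper that drives scripts/rpc.py against the target's default RPC socket; all arguments below are the ones shown in the trace):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192                        # TCP transport with the options used above
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0                           # 64 MiB RAM disk, 512 B blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # the same is repeated for cnode2/Malloc1, then bdevperf is started with its own RPC socket:
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &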
00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:53.844 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.103 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:54.103 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:20:54.103 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:20:54.103 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.103 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.103 NVMe0n1 00:20:54.103 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.103 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:54.103 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:54.103 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.103 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.103 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.103 1 00:20:54.103 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:20:54.103 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:20:54.103 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:20:54.103 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:54.103 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:54.103 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:54.103 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:54.103 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:20:54.103 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.103 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.103 request: 00:20:54.103 { 00:20:54.103 "name": "NVMe0", 00:20:54.103 "trtype": "tcp", 00:20:54.103 "traddr": "10.0.0.2", 00:20:54.103 "adrfam": "ipv4", 00:20:54.103 "trsvcid": "4420", 00:20:54.103 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:20:54.103 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:54.103 "hostaddr": "10.0.0.1", 00:20:54.103 "prchk_reftag": false, 00:20:54.103 "prchk_guard": false, 00:20:54.103 "hdgst": false, 00:20:54.103 "ddgst": false, 00:20:54.103 "allow_unrecognized_csi": false, 00:20:54.103 "method": "bdev_nvme_attach_controller", 00:20:54.103 "req_id": 1 00:20:54.103 } 00:20:54.103 Got JSON-RPC error response 00:20:54.103 response: 00:20:54.103 { 00:20:54.103 "code": -114, 00:20:54.104 "message": "A controller named NVMe0 already exists with the specified network path" 00:20:54.104 } 00:20:54.104 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:54.104 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:20:54.104 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:54.104 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:54.104 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:54.104 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:20:54.104 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:20:54.104 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:20:54.104 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:54.104 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:54.104 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:54.104 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:54.104 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:20:54.104 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.104 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.104 request: 00:20:54.104 { 00:20:54.104 "name": "NVMe0", 00:20:54.104 "trtype": "tcp", 00:20:54.104 "traddr": "10.0.0.2", 00:20:54.104 "adrfam": "ipv4", 00:20:54.104 "trsvcid": "4420", 00:20:54.104 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:54.104 "hostaddr": "10.0.0.1", 00:20:54.104 "prchk_reftag": false, 00:20:54.104 "prchk_guard": false, 00:20:54.104 "hdgst": false, 00:20:54.104 "ddgst": false, 00:20:54.104 "allow_unrecognized_csi": false, 00:20:54.104 "method": "bdev_nvme_attach_controller", 00:20:54.104 "req_id": 1 00:20:54.104 } 00:20:54.104 Got JSON-RPC error response 00:20:54.104 response: 00:20:54.104 { 00:20:54.104 "code": -114, 00:20:54.104 "message": "A controller named NVMe0 already exists with the specified network path" 00:20:54.104 } 00:20:54.104 07:22:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:54.104 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:20:54.104 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:54.104 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:54.104 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:54.104 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:20:54.104 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:20:54.104 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:20:54.104 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:54.104 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:54.104 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:54.104 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:54.104 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:20:54.104 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.104 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.363 request: 00:20:54.363 { 00:20:54.363 "name": "NVMe0", 00:20:54.363 "trtype": "tcp", 00:20:54.363 "traddr": "10.0.0.2", 00:20:54.363 "adrfam": "ipv4", 00:20:54.363 "trsvcid": "4420", 00:20:54.363 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:54.363 "hostaddr": "10.0.0.1", 00:20:54.363 "prchk_reftag": false, 00:20:54.363 "prchk_guard": false, 00:20:54.363 "hdgst": false, 00:20:54.363 "ddgst": false, 00:20:54.363 "multipath": "disable", 00:20:54.363 "allow_unrecognized_csi": false, 00:20:54.363 "method": "bdev_nvme_attach_controller", 00:20:54.363 "req_id": 1 00:20:54.363 } 00:20:54.363 Got JSON-RPC error response 00:20:54.363 response: 00:20:54.363 { 00:20:54.363 "code": -114, 00:20:54.363 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:20:54.363 } 00:20:54.363 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:54.363 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:20:54.363 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:54.363 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:54.363 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:54.363 07:22:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:20:54.363 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:20:54.363 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:20:54.363 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:54.363 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:54.363 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:54.363 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:54.363 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:20:54.363 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.363 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.363 request: 00:20:54.363 { 00:20:54.363 "name": "NVMe0", 00:20:54.363 "trtype": "tcp", 00:20:54.363 "traddr": "10.0.0.2", 00:20:54.363 "adrfam": "ipv4", 00:20:54.363 "trsvcid": "4420", 00:20:54.363 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:54.363 "hostaddr": "10.0.0.1", 00:20:54.363 "prchk_reftag": false, 00:20:54.363 "prchk_guard": false, 00:20:54.363 "hdgst": false, 00:20:54.363 "ddgst": false, 00:20:54.363 "multipath": "failover", 00:20:54.363 "allow_unrecognized_csi": false, 00:20:54.363 "method": "bdev_nvme_attach_controller", 00:20:54.363 "req_id": 1 00:20:54.363 } 00:20:54.363 Got JSON-RPC error response 00:20:54.363 response: 00:20:54.363 { 00:20:54.363 "code": -114, 00:20:54.363 "message": "A controller named NVMe0 already exists with the specified network path" 00:20:54.363 } 00:20:54.363 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:54.363 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:20:54.363 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:54.363 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:54.363 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:54.363 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:54.363 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.363 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.363 NVMe0n1 00:20:54.363 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
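The rejections above all reuse the controller name NVMe0 against port 4420 with a conflicting hostnqn, subsystem, or multipath mode, and each returns JSON-RPC error -114; the attach that follows succeeds because it only adds a second path (port 4421) to the same subsystem under the same name. A condensed sketch of the same calls issued directly against the bdevperf RPC socket, assuming SPDK's standard scripts/rpc.py client (the harness's rpc_cmd wrapper forwards to it; the flag spellings mirror the request dumps above):

    # initial attach: NVMe0 -> cnode1 at 10.0.0.2:4420, host address 10.0.0.1
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1

    # reusing the name NVMe0 for a different subsystem (or with -x disable /
    # -x failover against the existing path) is rejected with code -114
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1    # -> "already exists" error

    # adding a second listener port for the same subsystem is accepted and
    # registers the extra path (the call at multicontroller.sh:79 above)
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1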
00:20:54.363 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:54.363 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.363 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.363 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.363 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:20:54.363 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.363 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.621 00:20:54.621 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.621 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:54.621 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:54.621 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.621 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.621 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.621 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:54.621 07:22:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:55.995 { 00:20:55.995 "results": [ 00:20:55.995 { 00:20:55.995 "job": "NVMe0n1", 00:20:55.995 "core_mask": "0x1", 00:20:55.995 "workload": "write", 00:20:55.995 "status": "finished", 00:20:55.995 "queue_depth": 128, 00:20:55.995 "io_size": 4096, 00:20:55.995 "runtime": 1.007473, 00:20:55.995 "iops": 18280.390640741738, 00:20:55.995 "mibps": 71.40777594039741, 00:20:55.995 "io_failed": 0, 00:20:55.995 "io_timeout": 0, 00:20:55.995 "avg_latency_us": 6986.351186162542, 00:20:55.995 "min_latency_us": 4126.34074074074, 00:20:55.995 "max_latency_us": 14757.736296296296 00:20:55.995 } 00:20:55.995 ], 00:20:55.995 "core_count": 1 00:20:55.995 } 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2555122 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@952 -- # '[' -z 2555122 ']' 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 2555122 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2555122 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2555122' 00:20:55.995 killing process with pid 2555122 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 2555122 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 2555122 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:20:55.995 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:55.995 [2024-11-20 07:22:57.184326] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:20:55.995 [2024-11-20 07:22:57.184421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2555122 ] 00:20:55.995 [2024-11-20 07:22:57.252792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.995 [2024-11-20 07:22:57.312967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:55.995 [2024-11-20 07:22:57.856538] bdev.c:4897:bdev_name_add: *ERROR*: Bdev name 44b98d74-5923-40aa-97ec-a773d46b5d26 already exists 00:20:55.995 [2024-11-20 07:22:57.856575] bdev.c:8106:bdev_register: *ERROR*: Unable to add uuid:44b98d74-5923-40aa-97ec-a773d46b5d26 alias for bdev NVMe1n1 00:20:55.995 [2024-11-20 07:22:57.856605] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:55.995 Running I/O for 1 seconds... 00:20:55.995 18226.00 IOPS, 71.20 MiB/s 00:20:55.995 Latency(us) 00:20:55.995 [2024-11-20T06:22:59.428Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.995 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:55.995 NVMe0n1 : 1.01 18280.39 71.41 0.00 0.00 6986.35 4126.34 14757.74 00:20:55.995 [2024-11-20T06:22:59.428Z] =================================================================================================================== 00:20:55.995 [2024-11-20T06:22:59.428Z] Total : 18280.39 71.41 0.00 0.00 6986.35 4126.34 14757.74 00:20:55.995 Received shutdown signal, test time was about 1.000000 seconds 00:20:55.995 00:20:55.995 Latency(us) 00:20:55.995 [2024-11-20T06:22:59.428Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.995 [2024-11-20T06:22:59.428Z] =================================================================================================================== 00:20:55.995 [2024-11-20T06:22:59.428Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:55.995 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:55.995 rmmod nvme_tcp 00:20:55.995 rmmod nvme_fabrics 00:20:55.995 rmmod nvme_keyring 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:20:55.995 
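The throughput line in the bdevperf summary above follows directly from the JSON results: with 4096-byte I/Os, MiB/s = IOPS x io_size / 2^20. A quick check of the reported figures, assuming bc is available on the test host:

    # 18280.39 IOPS at 4096 B per I/O, expressed in MiB/s
    echo "scale=2; 18280.390640741738 * 4096 / 1048576" | bc
    # prints 71.40; the table rounds the same value to 71.41 MiB/s

At that rate, the 1.007473 s runtime corresponds to just over 18,400 completed I/Os in the timed window.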
07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2555022 ']' 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2555022 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 2555022 ']' 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 2555022 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2555022 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2555022' 00:20:55.995 killing process with pid 2555022 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 2555022 00:20:55.995 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 2555022 00:20:56.253 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:56.253 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:56.253 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:56.253 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:20:56.253 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:20:56.253 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:56.253 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:20:56.253 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:56.253 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:56.253 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.253 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:56.253 07:22:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:58.789 00:20:58.789 real 0m7.476s 00:20:58.789 user 0m11.280s 00:20:58.789 sys 0m2.487s 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:58.789 ************************************ 00:20:58.789 END TEST nvmf_multicontroller 00:20:58.789 ************************************ 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.789 ************************************ 00:20:58.789 START TEST nvmf_aer 00:20:58.789 ************************************ 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:58.789 * Looking for test storage... 00:20:58.789 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:58.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.789 --rc genhtml_branch_coverage=1 00:20:58.789 --rc genhtml_function_coverage=1 00:20:58.789 --rc genhtml_legend=1 00:20:58.789 --rc geninfo_all_blocks=1 00:20:58.789 --rc geninfo_unexecuted_blocks=1 00:20:58.789 00:20:58.789 ' 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:58.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.789 --rc genhtml_branch_coverage=1 00:20:58.789 --rc genhtml_function_coverage=1 00:20:58.789 --rc genhtml_legend=1 00:20:58.789 --rc geninfo_all_blocks=1 00:20:58.789 --rc geninfo_unexecuted_blocks=1 00:20:58.789 00:20:58.789 ' 00:20:58.789 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:58.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.790 --rc genhtml_branch_coverage=1 00:20:58.790 --rc genhtml_function_coverage=1 00:20:58.790 --rc genhtml_legend=1 00:20:58.790 --rc geninfo_all_blocks=1 00:20:58.790 --rc geninfo_unexecuted_blocks=1 00:20:58.790 00:20:58.790 ' 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:58.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.790 --rc genhtml_branch_coverage=1 00:20:58.790 --rc genhtml_function_coverage=1 00:20:58.790 --rc genhtml_legend=1 00:20:58.790 --rc geninfo_all_blocks=1 00:20:58.790 --rc geninfo_unexecuted_blocks=1 00:20:58.790 00:20:58.790 ' 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:58.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:20:58.790 07:23:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:00.692 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:00.692 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:00.692 Found net devices under 0000:09:00.0: cvl_0_0 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:00.692 07:23:04 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:00.692 Found net devices under 0000:09:00.1: cvl_0_1 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:00.692 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:00.951 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:00.951 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:00.951 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:00.951 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:00.951 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:00.951 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:00.951 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:00.951 
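Before the aer test can use the two e810 ports, nvmf_tcp_init (traced above) splits them across network namespaces: cvl_0_0 becomes the target-side interface (10.0.0.2) inside cvl_0_0_ns_spdk, while cvl_0_1 stays in the default namespace as the initiator side (10.0.0.1), with an iptables rule opening TCP port 4420. A condensed sketch of that sequence, assuming the cvl_* interface aliases this rig assigns to the NIC ports:

    TGT_NS=cvl_0_0_ns_spdk
    ip netns add "$TGT_NS"
    ip link set cvl_0_0 netns "$TGT_NS"                  # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
    ip netns exec "$TGT_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$TGT_NS" ip link set cvl_0_0 up
    ip netns exec "$TGT_NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The pings that follow confirm reachability in both directions across the namespace boundary before the target application is started.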
07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:00.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:00.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:21:00.951 00:21:00.951 --- 10.0.0.2 ping statistics --- 00:21:00.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.951 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:21:00.951 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:00.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:00.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:21:00.951 00:21:00.951 --- 10.0.0.1 ping statistics --- 00:21:00.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.951 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:21:00.951 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:00.951 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:21:00.951 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:00.951 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:00.951 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:00.951 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:00.951 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:00.951 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:00.951 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:00.951 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:00.951 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:00.951 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:00.951 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:00.951 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2557386 00:21:00.951 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:00.951 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2557386 00:21:00.951 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # '[' -z 2557386 ']' 00:21:00.951 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.951 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:00.951 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.951 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:00.951 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:00.951 [2024-11-20 07:23:04.301697] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:21:00.951 [2024-11-20 07:23:04.301768] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:00.951 [2024-11-20 07:23:04.375029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:01.209 [2024-11-20 07:23:04.434234] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:01.210 [2024-11-20 07:23:04.434283] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:01.210 [2024-11-20 07:23:04.434332] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:01.210 [2024-11-20 07:23:04.434346] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:01.210 [2024-11-20 07:23:04.434357] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:01.210 [2024-11-20 07:23:04.435876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:01.210 [2024-11-20 07:23:04.435941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:01.210 [2024-11-20 07:23:04.435998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:01.210 [2024-11-20 07:23:04.436002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.210 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:01.210 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@866 -- # return 0 00:21:01.210 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:01.210 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:01.210 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:01.210 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:01.210 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:01.210 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.210 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:01.210 [2024-11-20 07:23:04.589860] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:01.210 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.210 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:01.210 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.210 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:01.210 Malloc0 00:21:01.210 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.210 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:01.210 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.210 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:01.210 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:21:01.210 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:01.210 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.210 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:01.468 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.468 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:01.468 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.468 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:01.468 [2024-11-20 07:23:04.652328] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:01.468 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.468 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:01.468 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.468 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:01.468 [ 00:21:01.468 { 00:21:01.468 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:01.468 "subtype": "Discovery", 00:21:01.468 "listen_addresses": [], 00:21:01.468 "allow_any_host": true, 00:21:01.468 "hosts": [] 00:21:01.468 }, 00:21:01.468 { 00:21:01.468 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.468 "subtype": "NVMe", 00:21:01.468 "listen_addresses": [ 00:21:01.468 { 00:21:01.468 "trtype": "TCP", 00:21:01.468 "adrfam": "IPv4", 00:21:01.468 "traddr": "10.0.0.2", 00:21:01.468 "trsvcid": "4420" 00:21:01.468 } 00:21:01.468 ], 00:21:01.468 "allow_any_host": true, 00:21:01.468 "hosts": [], 00:21:01.468 "serial_number": "SPDK00000000000001", 00:21:01.468 "model_number": "SPDK bdev Controller", 00:21:01.468 "max_namespaces": 2, 00:21:01.468 "min_cntlid": 1, 00:21:01.468 "max_cntlid": 65519, 00:21:01.468 "namespaces": [ 00:21:01.468 { 00:21:01.468 "nsid": 1, 00:21:01.468 "bdev_name": "Malloc0", 00:21:01.468 "name": "Malloc0", 00:21:01.468 "nguid": "7AB697D2C3314707B0E7798E6973900C", 00:21:01.468 "uuid": "7ab697d2-c331-4707-b0e7-798e6973900c" 00:21:01.468 } 00:21:01.468 ] 00:21:01.468 } 00:21:01.468 ] 00:21:01.468 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.468 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:01.468 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:01.468 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2557414 00:21:01.468 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:01.468 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:01.468 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # local i=0 00:21:01.468 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:01.468 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:21:01.468 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=1 00:21:01.468 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:21:01.468 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:01.468 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:21:01.468 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=2 00:21:01.468 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:21:01.468 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:01.468 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 2 -lt 200 ']' 00:21:01.468 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=3 00:21:01.468 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:21:01.726 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:01.726 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:01.726 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1278 -- # return 0 00:21:01.726 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:01.726 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.726 07:23:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:01.726 Malloc1 00:21:01.726 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.726 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:01.726 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.726 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:01.726 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.726 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:01.726 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.726 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:01.726 [ 00:21:01.726 { 00:21:01.726 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:01.726 "subtype": "Discovery", 00:21:01.726 "listen_addresses": [], 00:21:01.726 "allow_any_host": true, 00:21:01.726 "hosts": [] 00:21:01.726 }, 00:21:01.726 { 00:21:01.726 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.726 "subtype": "NVMe", 00:21:01.726 "listen_addresses": [ 00:21:01.726 { 00:21:01.726 "trtype": "TCP", 00:21:01.726 "adrfam": "IPv4", 00:21:01.726 "traddr": "10.0.0.2", 00:21:01.726 "trsvcid": "4420" 00:21:01.726 } 00:21:01.726 ], 00:21:01.726 "allow_any_host": true, 00:21:01.726 "hosts": [], 00:21:01.726 "serial_number": "SPDK00000000000001", 00:21:01.726 "model_number": "SPDK bdev Controller", 00:21:01.726 "max_namespaces": 2, 00:21:01.726 "min_cntlid": 1, 00:21:01.726 "max_cntlid": 65519, 00:21:01.726 "namespaces": [ 00:21:01.726 
{ 00:21:01.726 "nsid": 1, 00:21:01.726 "bdev_name": "Malloc0", 00:21:01.726 "name": "Malloc0", 00:21:01.726 "nguid": "7AB697D2C3314707B0E7798E6973900C", 00:21:01.726 "uuid": "7ab697d2-c331-4707-b0e7-798e6973900c" 00:21:01.726 }, 00:21:01.726 { 00:21:01.726 "nsid": 2, 00:21:01.726 "bdev_name": "Malloc1", 00:21:01.726 "name": "Malloc1", 00:21:01.726 "nguid": "82060E00736A471EAD5297F38369B883", 00:21:01.726 "uuid": "82060e00-736a-471e-ad52-97f38369b883" 00:21:01.726 } 00:21:01.726 ] 00:21:01.726 } 00:21:01.726 ] 00:21:01.726 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.726 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2557414 00:21:01.726 Asynchronous Event Request test 00:21:01.726 Attaching to 10.0.0.2 00:21:01.726 Attached to 10.0.0.2 00:21:01.726 Registering asynchronous event callbacks... 00:21:01.727 Starting namespace attribute notice tests for all controllers... 00:21:01.727 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:01.727 aer_cb - Changed Namespace 00:21:01.727 Cleaning up... 00:21:01.727 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:01.727 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.727 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:01.727 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.727 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:01.727 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.727 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:01.727 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.727 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:01.727 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.727 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:01.727 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.727 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:01.727 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:01.727 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:01.727 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:21:01.727 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:01.727 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:21:01.727 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:01.727 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:01.727 rmmod nvme_tcp 00:21:01.727 rmmod nvme_fabrics 00:21:01.985 rmmod nvme_keyring 00:21:01.985 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:01.985 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:21:01.985 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:21:01.985 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 2557386 ']' 
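The Changed Namespace notice above is the point of the test: cnode1 is created with -m 2 (room for two namespaces) and exports Malloc0 as nsid 1; once test/nvme/aer/aer is connected over 10.0.0.2:4420 and signals readiness via /tmp/aer_touch_file, the script hot-adds Malloc1 as nsid 2, and the controller raises the AEN ("aer_cb for log page 4", the changed namespace list). A condensed sketch of the target-side RPC sequence, assuming SPDK's scripts/rpc.py against the default RPC socket (the log drives the same calls through the harness's rpc_cmd wrapper):

    # transport, backing bdev, subsystem with max_namespaces=2, first namespace, listener
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # with the AER consumer attached, hot-add a second namespace to trigger the event
    ./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2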
00:21:01.985 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2557386 00:21:01.985 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # '[' -z 2557386 ']' 00:21:01.985 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # kill -0 2557386 00:21:01.985 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # uname 00:21:01.985 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:01.985 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2557386 00:21:01.985 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:01.985 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:01.985 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2557386' 00:21:01.985 killing process with pid 2557386 00:21:01.985 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # kill 2557386 00:21:01.985 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@976 -- # wait 2557386 00:21:02.244 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:02.244 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:02.244 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:02.244 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:21:02.244 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:21:02.244 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:02.244 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:21:02.244 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:02.244 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:02.244 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.244 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:02.244 07:23:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.150 07:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:04.150 00:21:04.150 real 0m5.787s 00:21:04.150 user 0m4.923s 00:21:04.150 sys 0m2.115s 00:21:04.150 07:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:04.150 07:23:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:04.150 ************************************ 00:21:04.150 END TEST nvmf_aer 00:21:04.150 ************************************ 00:21:04.150 07:23:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:04.150 07:23:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:04.150 07:23:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:04.150 07:23:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.150 ************************************ 00:21:04.150 START TEST nvmf_async_init 00:21:04.150 
************************************ 00:21:04.150 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:04.409 * Looking for test storage... 00:21:04.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:04.409 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:04.409 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:21:04.409 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:04.409 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:04.409 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:04.409 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:04.409 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:04.409 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:21:04.409 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:21:04.409 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:21:04.409 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:21:04.409 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:21:04.409 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:21:04.409 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:21:04.409 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:04.409 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:21:04.409 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:21:04.409 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:04.409 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:04.409 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:21:04.409 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:21:04.409 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:04.409 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:21:04.409 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:21:04.409 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:21:04.409 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:21:04.409 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:04.409 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:21:04.409 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:04.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.410 --rc genhtml_branch_coverage=1 00:21:04.410 --rc genhtml_function_coverage=1 00:21:04.410 --rc genhtml_legend=1 00:21:04.410 --rc geninfo_all_blocks=1 00:21:04.410 --rc geninfo_unexecuted_blocks=1 00:21:04.410 00:21:04.410 ' 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:04.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.410 --rc genhtml_branch_coverage=1 00:21:04.410 --rc genhtml_function_coverage=1 00:21:04.410 --rc genhtml_legend=1 00:21:04.410 --rc geninfo_all_blocks=1 00:21:04.410 --rc geninfo_unexecuted_blocks=1 00:21:04.410 00:21:04.410 ' 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:04.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.410 --rc genhtml_branch_coverage=1 00:21:04.410 --rc genhtml_function_coverage=1 00:21:04.410 --rc genhtml_legend=1 00:21:04.410 --rc geninfo_all_blocks=1 00:21:04.410 --rc geninfo_unexecuted_blocks=1 00:21:04.410 00:21:04.410 ' 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:04.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.410 --rc genhtml_branch_coverage=1 00:21:04.410 --rc genhtml_function_coverage=1 00:21:04.410 --rc genhtml_legend=1 00:21:04.410 --rc geninfo_all_blocks=1 00:21:04.410 --rc geninfo_unexecuted_blocks=1 00:21:04.410 00:21:04.410 ' 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:04.410 07:23:07 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:04.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:04.410 07:23:07 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=4e244a8963aa4db0b68fec899739c256 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:04.410 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:04.411 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:21:04.411 07:23:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:06.977 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:06.977 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:21:06.977 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:06.977 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:06.977 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:06.977 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:06.977 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:06.977 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:21:06.977 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:06.977 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:21:06.977 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:21:06.977 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:21:06.977 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:21:06.977 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:21:06.977 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:21:06.977 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:06.977 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:06.977 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:06.978 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:06.978 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:06.978 Found net devices under 0000:09:00.0: cvl_0_0 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:06.978 Found net devices under 0000:09:00.1: cvl_0_1 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:06.978 07:23:09 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:06.978 07:23:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:06.978 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:06.978 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:06.978 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:06.978 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:06.978 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:06.978 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:06.978 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:06.978 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:06.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:06.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:21:06.978 00:21:06.978 --- 10.0.0.2 ping statistics --- 00:21:06.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.978 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:21:06.978 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:06.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:06.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:21:06.978 00:21:06.978 --- 10.0.0.1 ping statistics --- 00:21:06.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.978 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:21:06.978 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:06.978 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:21:06.978 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:06.978 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:06.978 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:06.978 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:06.978 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:06.978 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:06.978 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:06.978 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:06.978 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:06.978 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:06.978 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:06.978 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2559484 00:21:06.978 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:06.978 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2559484 00:21:06.978 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # '[' -z 2559484 ']' 00:21:06.978 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:06.978 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:06.978 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:06.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:06.978 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:06.978 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:06.979 [2024-11-20 07:23:10.151370] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:21:06.979 [2024-11-20 07:23:10.151465] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:06.979 [2024-11-20 07:23:10.227800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.979 [2024-11-20 07:23:10.287088] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:06.979 [2024-11-20 07:23:10.287151] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:06.979 [2024-11-20 07:23:10.287174] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:06.979 [2024-11-20 07:23:10.287184] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:06.979 [2024-11-20 07:23:10.287194] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:06.979 [2024-11-20 07:23:10.287741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.237 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:07.237 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@866 -- # return 0 00:21:07.237 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:07.237 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:07.237 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.237 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:07.237 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:07.237 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.237 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.237 [2024-11-20 07:23:10.438465] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:07.237 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.237 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:07.237 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.237 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.237 null0 00:21:07.237 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.237 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:07.237 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.237 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.237 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.237 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:07.237 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:07.237 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.237 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.237 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 4e244a8963aa4db0b68fec899739c256 00:21:07.237 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.237 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.237 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.237 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:07.237 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.237 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.237 [2024-11-20 07:23:10.478735] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:07.237 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.237 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:07.237 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.237 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.495 nvme0n1 00:21:07.496 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.496 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:07.496 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.496 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.496 [ 00:21:07.496 { 00:21:07.496 "name": "nvme0n1", 00:21:07.496 "aliases": [ 00:21:07.496 "4e244a89-63aa-4db0-b68f-ec899739c256" 00:21:07.496 ], 00:21:07.496 "product_name": "NVMe disk", 00:21:07.496 "block_size": 512, 00:21:07.496 "num_blocks": 2097152, 00:21:07.496 "uuid": "4e244a89-63aa-4db0-b68f-ec899739c256", 00:21:07.496 "numa_id": 0, 00:21:07.496 "assigned_rate_limits": { 00:21:07.496 "rw_ios_per_sec": 0, 00:21:07.496 "rw_mbytes_per_sec": 0, 00:21:07.496 "r_mbytes_per_sec": 0, 00:21:07.496 "w_mbytes_per_sec": 0 00:21:07.496 }, 00:21:07.496 "claimed": false, 00:21:07.496 "zoned": false, 00:21:07.496 "supported_io_types": { 00:21:07.496 "read": true, 00:21:07.496 "write": true, 00:21:07.496 "unmap": false, 00:21:07.496 "flush": true, 00:21:07.496 "reset": true, 00:21:07.496 "nvme_admin": true, 00:21:07.496 "nvme_io": true, 00:21:07.496 "nvme_io_md": false, 00:21:07.496 "write_zeroes": true, 00:21:07.496 "zcopy": false, 00:21:07.496 "get_zone_info": false, 00:21:07.496 "zone_management": false, 00:21:07.496 "zone_append": false, 00:21:07.496 "compare": true, 00:21:07.496 "compare_and_write": true, 00:21:07.496 "abort": true, 00:21:07.496 "seek_hole": false, 00:21:07.496 "seek_data": false, 00:21:07.496 "copy": true, 00:21:07.496 "nvme_iov_md": false 00:21:07.496 }, 00:21:07.496 
"memory_domains": [ 00:21:07.496 { 00:21:07.496 "dma_device_id": "system", 00:21:07.496 "dma_device_type": 1 00:21:07.496 } 00:21:07.496 ], 00:21:07.496 "driver_specific": { 00:21:07.496 "nvme": [ 00:21:07.496 { 00:21:07.496 "trid": { 00:21:07.496 "trtype": "TCP", 00:21:07.496 "adrfam": "IPv4", 00:21:07.496 "traddr": "10.0.0.2", 00:21:07.496 "trsvcid": "4420", 00:21:07.496 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:07.496 }, 00:21:07.496 "ctrlr_data": { 00:21:07.496 "cntlid": 1, 00:21:07.496 "vendor_id": "0x8086", 00:21:07.496 "model_number": "SPDK bdev Controller", 00:21:07.496 "serial_number": "00000000000000000000", 00:21:07.496 "firmware_revision": "25.01", 00:21:07.496 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:07.496 "oacs": { 00:21:07.496 "security": 0, 00:21:07.496 "format": 0, 00:21:07.496 "firmware": 0, 00:21:07.496 "ns_manage": 0 00:21:07.496 }, 00:21:07.496 "multi_ctrlr": true, 00:21:07.496 "ana_reporting": false 00:21:07.496 }, 00:21:07.496 "vs": { 00:21:07.496 "nvme_version": "1.3" 00:21:07.496 }, 00:21:07.496 "ns_data": { 00:21:07.496 "id": 1, 00:21:07.496 "can_share": true 00:21:07.496 } 00:21:07.496 } 00:21:07.496 ], 00:21:07.496 "mp_policy": "active_passive" 00:21:07.496 } 00:21:07.496 } 00:21:07.496 ] 00:21:07.496 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.496 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:07.496 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.496 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.496 [2024-11-20 07:23:10.727807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:07.496 [2024-11-20 07:23:10.727879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadcb20 (9): Bad file descriptor 00:21:07.496 [2024-11-20 07:23:10.860445] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:21:07.496 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.496 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:07.496 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.496 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.496 [ 00:21:07.496 { 00:21:07.496 "name": "nvme0n1", 00:21:07.496 "aliases": [ 00:21:07.496 "4e244a89-63aa-4db0-b68f-ec899739c256" 00:21:07.496 ], 00:21:07.496 "product_name": "NVMe disk", 00:21:07.496 "block_size": 512, 00:21:07.496 "num_blocks": 2097152, 00:21:07.496 "uuid": "4e244a89-63aa-4db0-b68f-ec899739c256", 00:21:07.496 "numa_id": 0, 00:21:07.496 "assigned_rate_limits": { 00:21:07.496 "rw_ios_per_sec": 0, 00:21:07.496 "rw_mbytes_per_sec": 0, 00:21:07.496 "r_mbytes_per_sec": 0, 00:21:07.496 "w_mbytes_per_sec": 0 00:21:07.496 }, 00:21:07.496 "claimed": false, 00:21:07.496 "zoned": false, 00:21:07.496 "supported_io_types": { 00:21:07.496 "read": true, 00:21:07.496 "write": true, 00:21:07.496 "unmap": false, 00:21:07.496 "flush": true, 00:21:07.496 "reset": true, 00:21:07.496 "nvme_admin": true, 00:21:07.496 "nvme_io": true, 00:21:07.496 "nvme_io_md": false, 00:21:07.496 "write_zeroes": true, 00:21:07.496 "zcopy": false, 00:21:07.496 "get_zone_info": false, 00:21:07.496 "zone_management": false, 00:21:07.496 "zone_append": false, 00:21:07.496 "compare": true, 00:21:07.496 "compare_and_write": true, 00:21:07.496 "abort": true, 00:21:07.496 "seek_hole": false, 00:21:07.496 "seek_data": false, 00:21:07.496 "copy": true, 00:21:07.496 "nvme_iov_md": false 00:21:07.496 }, 00:21:07.496 "memory_domains": [ 00:21:07.496 { 00:21:07.496 "dma_device_id": "system", 00:21:07.496 "dma_device_type": 1 00:21:07.496 } 00:21:07.496 ], 00:21:07.496 "driver_specific": { 00:21:07.496 "nvme": [ 00:21:07.496 { 00:21:07.496 "trid": { 00:21:07.496 "trtype": "TCP", 00:21:07.496 "adrfam": "IPv4", 00:21:07.496 "traddr": "10.0.0.2", 00:21:07.496 "trsvcid": "4420", 00:21:07.496 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:07.496 }, 00:21:07.496 "ctrlr_data": { 00:21:07.496 "cntlid": 2, 00:21:07.496 "vendor_id": "0x8086", 00:21:07.496 "model_number": "SPDK bdev Controller", 00:21:07.496 "serial_number": "00000000000000000000", 00:21:07.496 "firmware_revision": "25.01", 00:21:07.496 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:07.496 "oacs": { 00:21:07.496 "security": 0, 00:21:07.496 "format": 0, 00:21:07.496 "firmware": 0, 00:21:07.496 "ns_manage": 0 00:21:07.496 }, 00:21:07.496 "multi_ctrlr": true, 00:21:07.496 "ana_reporting": false 00:21:07.496 }, 00:21:07.496 "vs": { 00:21:07.496 "nvme_version": "1.3" 00:21:07.496 }, 00:21:07.496 "ns_data": { 00:21:07.496 "id": 1, 00:21:07.496 "can_share": true 00:21:07.496 } 00:21:07.496 } 00:21:07.496 ], 00:21:07.496 "mp_policy": "active_passive" 00:21:07.496 } 00:21:07.496 } 00:21:07.496 ] 00:21:07.496 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.496 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:07.496 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.496 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.496 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
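One detail worth pulling out of the two bdev_get_bdevs dumps: the namespace data is identical before and after the reset, but ctrlr_data.cntlid moves from 1 to 2, showing that the host really tore down the old association and established a new one instead of reusing the original controller. The controller is then removed before the TLS portion of the test begins:

  rpc_cmd bdev_nvme_reset_controller nvme0    # disconnect/reconnect; cntlid 1 -> 2
  rpc_cmd bdev_get_bdevs -b nvme0n1           # same uuid, new cntlid
  rpc_cmd bdev_nvme_detach_controller nvme0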
00:21:07.496 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:07.496 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.xJ16k35acM 00:21:07.496 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:07.496 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.xJ16k35acM 00:21:07.496 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.xJ16k35acM 00:21:07.496 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.496 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.496 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.496 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:07.496 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.496 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.496 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.496 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:07.496 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.496 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.496 [2024-11-20 07:23:10.916431] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:07.496 [2024-11-20 07:23:10.916590] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:07.496 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.496 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:21:07.496 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.496 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.754 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.754 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:07.754 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.754 07:23:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.754 [2024-11-20 07:23:10.932502] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:07.754 nvme0n1 00:21:07.754 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.754 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:21:07.755 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.755 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.755 [ 00:21:07.755 { 00:21:07.755 "name": "nvme0n1", 00:21:07.755 "aliases": [ 00:21:07.755 "4e244a89-63aa-4db0-b68f-ec899739c256" 00:21:07.755 ], 00:21:07.755 "product_name": "NVMe disk", 00:21:07.755 "block_size": 512, 00:21:07.755 "num_blocks": 2097152, 00:21:07.755 "uuid": "4e244a89-63aa-4db0-b68f-ec899739c256", 00:21:07.755 "numa_id": 0, 00:21:07.755 "assigned_rate_limits": { 00:21:07.755 "rw_ios_per_sec": 0, 00:21:07.755 "rw_mbytes_per_sec": 0, 00:21:07.755 "r_mbytes_per_sec": 0, 00:21:07.755 "w_mbytes_per_sec": 0 00:21:07.755 }, 00:21:07.755 "claimed": false, 00:21:07.755 "zoned": false, 00:21:07.755 "supported_io_types": { 00:21:07.755 "read": true, 00:21:07.755 "write": true, 00:21:07.755 "unmap": false, 00:21:07.755 "flush": true, 00:21:07.755 "reset": true, 00:21:07.755 "nvme_admin": true, 00:21:07.755 "nvme_io": true, 00:21:07.755 "nvme_io_md": false, 00:21:07.755 "write_zeroes": true, 00:21:07.755 "zcopy": false, 00:21:07.755 "get_zone_info": false, 00:21:07.755 "zone_management": false, 00:21:07.755 "zone_append": false, 00:21:07.755 "compare": true, 00:21:07.755 "compare_and_write": true, 00:21:07.755 "abort": true, 00:21:07.755 "seek_hole": false, 00:21:07.755 "seek_data": false, 00:21:07.755 "copy": true, 00:21:07.755 "nvme_iov_md": false 00:21:07.755 }, 00:21:07.755 "memory_domains": [ 00:21:07.755 { 00:21:07.755 "dma_device_id": "system", 00:21:07.755 "dma_device_type": 1 00:21:07.755 } 00:21:07.755 ], 00:21:07.755 "driver_specific": { 00:21:07.755 "nvme": [ 00:21:07.755 { 00:21:07.755 "trid": { 00:21:07.755 "trtype": "TCP", 00:21:07.755 "adrfam": "IPv4", 00:21:07.755 "traddr": "10.0.0.2", 00:21:07.755 "trsvcid": "4421", 00:21:07.755 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:07.755 }, 00:21:07.755 "ctrlr_data": { 00:21:07.755 "cntlid": 3, 00:21:07.755 "vendor_id": "0x8086", 00:21:07.755 "model_number": "SPDK bdev Controller", 00:21:07.755 "serial_number": "00000000000000000000", 00:21:07.755 "firmware_revision": "25.01", 00:21:07.755 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:07.755 "oacs": { 00:21:07.755 "security": 0, 00:21:07.755 "format": 0, 00:21:07.755 "firmware": 0, 00:21:07.755 "ns_manage": 0 00:21:07.755 }, 00:21:07.755 "multi_ctrlr": true, 00:21:07.755 "ana_reporting": false 00:21:07.755 }, 00:21:07.755 "vs": { 00:21:07.755 "nvme_version": "1.3" 00:21:07.755 }, 00:21:07.755 "ns_data": { 00:21:07.755 "id": 1, 00:21:07.755 "can_share": true 00:21:07.755 } 00:21:07.755 } 00:21:07.755 ], 00:21:07.755 "mp_policy": "active_passive" 00:21:07.755 } 00:21:07.755 } 00:21:07.755 ] 00:21:07.755 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.755 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:07.755 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.755 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.755 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.755 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.xJ16k35acM 00:21:07.755 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
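The second half of the test exercises the same attach path over a TLS-protected listener. A pre-shared key in the NVMe TLS PSK interchange format is written to a temporary file (the /tmp/tmp.xJ16k35acM name is just what mktemp produced on this run), registered with the keyring, and required for host nqn.2016-06.io.spdk:host1 on a new --secure-channel listener at port 4421; both the target and the initiator note that TLS support is considered experimental. Sketched from the trace, with the redirect into the key file made explicit:

  key_path=$(mktemp)
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
  chmod 0600 "$key_path"
  rpc_cmd keyring_file_add_key key0 "$key_path"
  rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
  rpc_cmd bdev_get_bdevs -b nvme0n1           # dump below shows trsvcid 4421 and cntlid 3
  rpc_cmd bdev_nvme_detach_controller nvme0
  rm -f "$key_path"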
00:21:07.755 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:21:07.755 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:07.755 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:21:07.755 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:07.755 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:21:07.755 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:07.755 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:07.755 rmmod nvme_tcp 00:21:07.755 rmmod nvme_fabrics 00:21:07.755 rmmod nvme_keyring 00:21:07.755 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:07.755 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:21:07.755 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:21:07.755 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2559484 ']' 00:21:07.755 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2559484 00:21:07.755 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' -z 2559484 ']' 00:21:07.755 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # kill -0 2559484 00:21:07.755 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # uname 00:21:07.755 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:07.755 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2559484 00:21:07.755 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:07.755 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:07.755 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2559484' 00:21:07.755 killing process with pid 2559484 00:21:07.755 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # kill 2559484 00:21:07.755 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@976 -- # wait 2559484 00:21:08.013 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:08.013 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:08.013 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:08.013 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:21:08.013 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:21:08.013 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:08.013 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:21:08.013 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:08.013 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:08.013 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
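Teardown (nvmftestfini) is symmetric with the nvmftestinit setup earlier in the trace: unload the host-side NVMe/TCP modules, kill the nvmf_tgt process by pid, strip the SPDK_NVMF-tagged iptables rules, and remove the cvl_0_0_ns_spdk network namespace before flushing the initiator interface. Roughly, as reflected in the log lines around this point:

  modprobe -v -r nvme-tcp nvme-fabrics           # the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above
  kill "$nvmfpid"                                # nvmf_tgt was started with -i 0 -e 0xFFFF -m 0x1 inside the netns
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns del cvl_0_0_ns_spdk                   # roughly what _remove_spdk_ns amounts to on this setup
  ip -4 addr flush cvl_0_1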
00:21:08.013 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:08.013 07:23:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:10.550 00:21:10.550 real 0m5.805s 00:21:10.550 user 0m2.178s 00:21:10.550 sys 0m2.062s 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:10.550 ************************************ 00:21:10.550 END TEST nvmf_async_init 00:21:10.550 ************************************ 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.550 ************************************ 00:21:10.550 START TEST dma 00:21:10.550 ************************************ 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:10.550 * Looking for test storage... 00:21:10.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:10.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.550 --rc genhtml_branch_coverage=1 00:21:10.550 --rc genhtml_function_coverage=1 00:21:10.550 --rc genhtml_legend=1 00:21:10.550 --rc geninfo_all_blocks=1 00:21:10.550 --rc geninfo_unexecuted_blocks=1 00:21:10.550 00:21:10.550 ' 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:10.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.550 --rc genhtml_branch_coverage=1 00:21:10.550 --rc genhtml_function_coverage=1 00:21:10.550 --rc genhtml_legend=1 00:21:10.550 --rc geninfo_all_blocks=1 00:21:10.550 --rc geninfo_unexecuted_blocks=1 00:21:10.550 00:21:10.550 ' 00:21:10.550 07:23:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:10.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.551 --rc genhtml_branch_coverage=1 00:21:10.551 --rc genhtml_function_coverage=1 00:21:10.551 --rc genhtml_legend=1 00:21:10.551 --rc geninfo_all_blocks=1 00:21:10.551 --rc geninfo_unexecuted_blocks=1 00:21:10.551 00:21:10.551 ' 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:10.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.551 --rc genhtml_branch_coverage=1 00:21:10.551 --rc genhtml_function_coverage=1 00:21:10.551 --rc genhtml_legend=1 00:21:10.551 --rc geninfo_all_blocks=1 00:21:10.551 --rc geninfo_unexecuted_blocks=1 00:21:10.551 00:21:10.551 ' 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:10.551 
07:23:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:10.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:21:10.551 00:21:10.551 real 0m0.159s 00:21:10.551 user 0m0.104s 00:21:10.551 sys 0m0.064s 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:21:10.551 ************************************ 00:21:10.551 END TEST dma 00:21:10.551 ************************************ 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.551 ************************************ 00:21:10.551 START TEST nvmf_identify 00:21:10.551 
************************************ 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:10.551 * Looking for test storage... 00:21:10.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:10.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.551 --rc genhtml_branch_coverage=1 00:21:10.551 --rc genhtml_function_coverage=1 00:21:10.551 --rc genhtml_legend=1 00:21:10.551 --rc geninfo_all_blocks=1 00:21:10.551 --rc geninfo_unexecuted_blocks=1 00:21:10.551 00:21:10.551 ' 00:21:10.551 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:10.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.551 --rc genhtml_branch_coverage=1 00:21:10.551 --rc genhtml_function_coverage=1 00:21:10.551 --rc genhtml_legend=1 00:21:10.552 --rc geninfo_all_blocks=1 00:21:10.552 --rc geninfo_unexecuted_blocks=1 00:21:10.552 00:21:10.552 ' 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:10.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.552 --rc genhtml_branch_coverage=1 00:21:10.552 --rc genhtml_function_coverage=1 00:21:10.552 --rc genhtml_legend=1 00:21:10.552 --rc geninfo_all_blocks=1 00:21:10.552 --rc geninfo_unexecuted_blocks=1 00:21:10.552 00:21:10.552 ' 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:10.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.552 --rc genhtml_branch_coverage=1 00:21:10.552 --rc genhtml_function_coverage=1 00:21:10.552 --rc genhtml_legend=1 00:21:10.552 --rc geninfo_all_blocks=1 00:21:10.552 --rc geninfo_unexecuted_blocks=1 00:21:10.552 00:21:10.552 ' 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:10.552 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:21:10.552 07:23:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:12.456 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:12.456 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:21:12.456 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:12.456 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:12.456 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:12.456 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:12.456 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:12.456 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:21:12.456 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:12.456 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:12.457 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:12.457 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:12.457 Found net devices under 0000:09:00.0: cvl_0_0 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:12.457 Found net devices under 0000:09:00.1: cvl_0_1 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:12.457 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:12.716 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:12.716 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:12.716 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:12.716 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:12.716 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:12.716 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:21:12.716 00:21:12.716 --- 10.0.0.2 ping statistics --- 00:21:12.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.716 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:21:12.716 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:12.716 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:12.716 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:21:12.716 00:21:12.716 --- 10.0.0.1 ping statistics --- 00:21:12.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.716 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:21:12.716 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:12.716 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:21:12.716 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:12.716 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:12.716 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:12.716 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:12.716 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:12.716 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:12.716 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:12.716 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:12.716 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:12.716 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:12.716 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2561625 00:21:12.716 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:12.716 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:12.716 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2561625 00:21:12.716 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 2561625 ']' 00:21:12.716 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.716 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:12.716 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.716 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:12.716 07:23:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:12.716 [2024-11-20 07:23:15.994039] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:21:12.716 [2024-11-20 07:23:15.994117] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.716 [2024-11-20 07:23:16.067604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:12.716 [2024-11-20 07:23:16.129327] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:12.716 [2024-11-20 07:23:16.129383] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:12.716 [2024-11-20 07:23:16.129397] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:12.716 [2024-11-20 07:23:16.129408] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:12.716 [2024-11-20 07:23:16.129418] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:12.716 [2024-11-20 07:23:16.130997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:12.716 [2024-11-20 07:23:16.131063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:12.716 [2024-11-20 07:23:16.131130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:12.716 [2024-11-20 07:23:16.131134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.974 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:12.974 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:21:12.974 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:12.974 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.974 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:12.974 [2024-11-20 07:23:16.265129] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:12.974 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.974 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:12.974 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:12.974 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:12.974 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:12.974 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.974 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:12.974 Malloc0 00:21:12.974 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.974 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:12.974 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.974 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:12.974 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.974 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:12.974 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.974 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:12.974 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.974 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:12.974 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.974 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:12.974 [2024-11-20 07:23:16.364110] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:12.974 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.974 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:12.974 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.974 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:12.974 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.974 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:12.974 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.974 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:12.974 [ 00:21:12.974 { 00:21:12.974 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:12.974 "subtype": "Discovery", 00:21:12.974 "listen_addresses": [ 00:21:12.974 { 00:21:12.974 "trtype": "TCP", 00:21:12.974 "adrfam": "IPv4", 00:21:12.974 "traddr": "10.0.0.2", 00:21:12.974 "trsvcid": "4420" 00:21:12.974 } 00:21:12.974 ], 00:21:12.974 "allow_any_host": true, 00:21:12.974 "hosts": [] 00:21:12.974 }, 00:21:12.974 { 00:21:12.974 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:12.974 "subtype": "NVMe", 00:21:12.974 "listen_addresses": [ 00:21:12.974 { 00:21:12.974 "trtype": "TCP", 00:21:12.974 "adrfam": "IPv4", 00:21:12.974 "traddr": "10.0.0.2", 00:21:12.974 "trsvcid": "4420" 00:21:12.974 } 00:21:12.974 ], 00:21:12.974 "allow_any_host": true, 00:21:12.974 "hosts": [], 00:21:12.974 "serial_number": "SPDK00000000000001", 00:21:12.974 "model_number": "SPDK bdev Controller", 00:21:12.974 "max_namespaces": 32, 00:21:12.974 "min_cntlid": 1, 00:21:12.974 "max_cntlid": 65519, 00:21:12.974 "namespaces": [ 00:21:12.974 { 00:21:12.974 "nsid": 1, 00:21:12.974 "bdev_name": "Malloc0", 00:21:12.974 "name": "Malloc0", 00:21:12.974 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:12.974 "eui64": "ABCDEF0123456789", 00:21:12.974 "uuid": "23cedb15-5eb6-4b1e-aefb-d1b8cd35e385" 00:21:12.974 } 00:21:12.974 ] 00:21:12.974 } 00:21:12.974 ] 00:21:12.974 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.974 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:13.235 [2024-11-20 07:23:16.406572] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:21:13.235 [2024-11-20 07:23:16.406616] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2561654 ] 00:21:13.235 [2024-11-20 07:23:16.457488] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:21:13.235 [2024-11-20 07:23:16.457558] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:13.235 [2024-11-20 07:23:16.457569] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:13.235 [2024-11-20 07:23:16.457606] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:13.235 [2024-11-20 07:23:16.457624] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:13.235 [2024-11-20 07:23:16.461810] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:21:13.235 [2024-11-20 07:23:16.461876] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xc92690 0 00:21:13.235 [2024-11-20 07:23:16.462007] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:13.235 [2024-11-20 07:23:16.462027] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:13.235 [2024-11-20 07:23:16.462037] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:13.235 [2024-11-20 07:23:16.462043] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:13.235 [2024-11-20 07:23:16.462095] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.235 [2024-11-20 07:23:16.462110] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.235 [2024-11-20 07:23:16.462119] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc92690) 00:21:13.235 [2024-11-20 07:23:16.462139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:13.235 [2024-11-20 07:23:16.462165] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf4100, cid 0, qid 0 00:21:13.235 [2024-11-20 07:23:16.469319] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.235 [2024-11-20 07:23:16.469342] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.235 [2024-11-20 07:23:16.469351] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.235 [2024-11-20 07:23:16.469359] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf4100) on tqpair=0xc92690 00:21:13.235 [2024-11-20 07:23:16.469380] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:13.235 [2024-11-20 07:23:16.469393] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:21:13.235 [2024-11-20 07:23:16.469404] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:21:13.235 [2024-11-20 07:23:16.469427] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.235 [2024-11-20 07:23:16.469436] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.235 [2024-11-20 07:23:16.469442] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc92690) 00:21:13.235 [2024-11-20 07:23:16.469453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.235 [2024-11-20 07:23:16.469477] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf4100, cid 0, qid 0 00:21:13.235 [2024-11-20 07:23:16.469599] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.235 [2024-11-20 07:23:16.469612] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.235 [2024-11-20 07:23:16.469619] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.235 [2024-11-20 07:23:16.469625] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf4100) on tqpair=0xc92690 00:21:13.235 [2024-11-20 07:23:16.469636] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:21:13.235 [2024-11-20 07:23:16.469648] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:21:13.235 [2024-11-20 07:23:16.469662] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.235 [2024-11-20 07:23:16.469669] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.235 [2024-11-20 07:23:16.469676] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc92690) 00:21:13.235 [2024-11-20 07:23:16.469686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.235 [2024-11-20 07:23:16.469707] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf4100, cid 0, qid 0 00:21:13.235 [2024-11-20 07:23:16.469789] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.235 [2024-11-20 07:23:16.469801] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.235 [2024-11-20 07:23:16.469807] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.235 [2024-11-20 07:23:16.469814] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf4100) on tqpair=0xc92690 00:21:13.235 [2024-11-20 07:23:16.469825] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:21:13.235 [2024-11-20 07:23:16.469839] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:21:13.235 [2024-11-20 07:23:16.469851] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.235 [2024-11-20 07:23:16.469859] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.235 [2024-11-20 07:23:16.469865] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc92690) 00:21:13.235 [2024-11-20 07:23:16.469875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.235 [2024-11-20 07:23:16.469896] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf4100, cid 0, qid 0 
00:21:13.235 [2024-11-20 07:23:16.469974] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.235 [2024-11-20 07:23:16.469992] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.235 [2024-11-20 07:23:16.469999] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.235 [2024-11-20 07:23:16.470006] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf4100) on tqpair=0xc92690 00:21:13.235 [2024-11-20 07:23:16.470015] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:13.235 [2024-11-20 07:23:16.470031] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.235 [2024-11-20 07:23:16.470040] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.235 [2024-11-20 07:23:16.470047] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc92690) 00:21:13.235 [2024-11-20 07:23:16.470057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.235 [2024-11-20 07:23:16.470077] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf4100, cid 0, qid 0 00:21:13.235 [2024-11-20 07:23:16.470160] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.235 [2024-11-20 07:23:16.470173] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.235 [2024-11-20 07:23:16.470180] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.235 [2024-11-20 07:23:16.470187] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf4100) on tqpair=0xc92690 00:21:13.235 [2024-11-20 07:23:16.470196] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:21:13.235 [2024-11-20 07:23:16.470205] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:21:13.235 [2024-11-20 07:23:16.470218] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:13.235 [2024-11-20 07:23:16.470329] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:21:13.235 [2024-11-20 07:23:16.470339] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:13.235 [2024-11-20 07:23:16.470356] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.235 [2024-11-20 07:23:16.470364] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.235 [2024-11-20 07:23:16.470370] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc92690) 00:21:13.235 [2024-11-20 07:23:16.470381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.235 [2024-11-20 07:23:16.470402] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf4100, cid 0, qid 0 00:21:13.235 [2024-11-20 07:23:16.470486] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.235 [2024-11-20 07:23:16.470498] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.235 [2024-11-20 07:23:16.470505] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.235 [2024-11-20 07:23:16.470512] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf4100) on tqpair=0xc92690 00:21:13.235 [2024-11-20 07:23:16.470521] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:13.235 [2024-11-20 07:23:16.470537] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.235 [2024-11-20 07:23:16.470546] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.235 [2024-11-20 07:23:16.470552] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc92690) 00:21:13.235 [2024-11-20 07:23:16.470562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.235 [2024-11-20 07:23:16.470583] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf4100, cid 0, qid 0 00:21:13.235 [2024-11-20 07:23:16.470666] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.235 [2024-11-20 07:23:16.470680] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.235 [2024-11-20 07:23:16.470687] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.235 [2024-11-20 07:23:16.470694] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf4100) on tqpair=0xc92690 00:21:13.235 [2024-11-20 07:23:16.470701] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:13.235 [2024-11-20 07:23:16.470709] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:21:13.235 [2024-11-20 07:23:16.470723] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:21:13.235 [2024-11-20 07:23:16.470738] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:21:13.236 [2024-11-20 07:23:16.470756] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.236 [2024-11-20 07:23:16.470764] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc92690) 00:21:13.236 [2024-11-20 07:23:16.470774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.236 [2024-11-20 07:23:16.470795] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf4100, cid 0, qid 0 00:21:13.236 [2024-11-20 07:23:16.470917] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:13.236 [2024-11-20 07:23:16.470932] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:13.236 [2024-11-20 07:23:16.470939] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:13.236 [2024-11-20 07:23:16.470946] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc92690): datao=0, datal=4096, cccid=0 00:21:13.236 [2024-11-20 07:23:16.470954] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xcf4100) on tqpair(0xc92690): expected_datao=0, payload_size=4096 00:21:13.236 [2024-11-20 07:23:16.470963] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.236 [2024-11-20 07:23:16.470975] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:13.236 [2024-11-20 07:23:16.470984] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:13.236 [2024-11-20 07:23:16.470996] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.236 [2024-11-20 07:23:16.471006] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.236 [2024-11-20 07:23:16.471013] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.236 [2024-11-20 07:23:16.471019] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf4100) on tqpair=0xc92690 00:21:13.236 [2024-11-20 07:23:16.471033] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:21:13.236 [2024-11-20 07:23:16.471042] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:21:13.236 [2024-11-20 07:23:16.471049] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:21:13.236 [2024-11-20 07:23:16.471065] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:21:13.236 [2024-11-20 07:23:16.471075] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:21:13.236 [2024-11-20 07:23:16.471083] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:21:13.236 [2024-11-20 07:23:16.471102] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:21:13.236 [2024-11-20 07:23:16.471119] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.236 [2024-11-20 07:23:16.471127] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.236 [2024-11-20 07:23:16.471134] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc92690) 00:21:13.236 [2024-11-20 07:23:16.471145] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:13.236 [2024-11-20 07:23:16.471165] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf4100, cid 0, qid 0 00:21:13.236 [2024-11-20 07:23:16.471254] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.236 [2024-11-20 07:23:16.471266] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.236 [2024-11-20 07:23:16.471273] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.236 [2024-11-20 07:23:16.471280] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf4100) on tqpair=0xc92690 00:21:13.236 [2024-11-20 07:23:16.471292] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.236 [2024-11-20 07:23:16.471300] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.236 [2024-11-20 07:23:16.471314] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc92690) 00:21:13.236 [2024-11-20 
07:23:16.471324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.236 [2024-11-20 07:23:16.471335] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.236 [2024-11-20 07:23:16.471342] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.236 [2024-11-20 07:23:16.471348] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xc92690) 00:21:13.236 [2024-11-20 07:23:16.471356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.236 [2024-11-20 07:23:16.471366] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.236 [2024-11-20 07:23:16.471373] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.236 [2024-11-20 07:23:16.471379] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xc92690) 00:21:13.236 [2024-11-20 07:23:16.471387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.236 [2024-11-20 07:23:16.471396] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.236 [2024-11-20 07:23:16.471403] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.236 [2024-11-20 07:23:16.471409] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc92690) 00:21:13.236 [2024-11-20 07:23:16.471417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.236 [2024-11-20 07:23:16.471426] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:13.236 [2024-11-20 07:23:16.471441] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:13.236 [2024-11-20 07:23:16.471453] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.236 [2024-11-20 07:23:16.471460] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc92690) 00:21:13.236 [2024-11-20 07:23:16.471470] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.236 [2024-11-20 07:23:16.471492] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf4100, cid 0, qid 0 00:21:13.236 [2024-11-20 07:23:16.471504] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf4280, cid 1, qid 0 00:21:13.236 [2024-11-20 07:23:16.471512] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf4400, cid 2, qid 0 00:21:13.236 [2024-11-20 07:23:16.471519] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf4580, cid 3, qid 0 00:21:13.236 [2024-11-20 07:23:16.471531] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf4700, cid 4, qid 0 00:21:13.236 [2024-11-20 07:23:16.471639] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.236 [2024-11-20 07:23:16.471653] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.236 [2024-11-20 07:23:16.471660] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.236 
[2024-11-20 07:23:16.471666] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf4700) on tqpair=0xc92690 00:21:13.236 [2024-11-20 07:23:16.471682] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:21:13.236 [2024-11-20 07:23:16.471692] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:21:13.236 [2024-11-20 07:23:16.471710] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.236 [2024-11-20 07:23:16.471719] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc92690) 00:21:13.236 [2024-11-20 07:23:16.471730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.236 [2024-11-20 07:23:16.471751] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf4700, cid 4, qid 0 00:21:13.236 [2024-11-20 07:23:16.471852] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:13.236 [2024-11-20 07:23:16.471865] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:13.236 [2024-11-20 07:23:16.471871] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:13.236 [2024-11-20 07:23:16.471877] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc92690): datao=0, datal=4096, cccid=4 00:21:13.236 [2024-11-20 07:23:16.471885] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcf4700) on tqpair(0xc92690): expected_datao=0, payload_size=4096 00:21:13.236 [2024-11-20 07:23:16.471892] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.236 [2024-11-20 07:23:16.471907] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:13.236 [2024-11-20 07:23:16.471916] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:13.236 [2024-11-20 07:23:16.515335] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.236 [2024-11-20 07:23:16.515357] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.236 [2024-11-20 07:23:16.515365] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.236 [2024-11-20 07:23:16.515372] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf4700) on tqpair=0xc92690 00:21:13.236 [2024-11-20 07:23:16.515395] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:21:13.236 [2024-11-20 07:23:16.515436] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.236 [2024-11-20 07:23:16.515447] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc92690) 00:21:13.236 [2024-11-20 07:23:16.515459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.236 [2024-11-20 07:23:16.515471] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.236 [2024-11-20 07:23:16.515478] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.236 [2024-11-20 07:23:16.515484] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc92690) 00:21:13.236 [2024-11-20 07:23:16.515493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP 
ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.236 [2024-11-20 07:23:16.515522] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf4700, cid 4, qid 0 00:21:13.236 [2024-11-20 07:23:16.515551] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf4880, cid 5, qid 0 00:21:13.236 [2024-11-20 07:23:16.515688] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:13.236 [2024-11-20 07:23:16.515701] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:13.236 [2024-11-20 07:23:16.515713] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:13.236 [2024-11-20 07:23:16.515720] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc92690): datao=0, datal=1024, cccid=4 00:21:13.236 [2024-11-20 07:23:16.515728] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcf4700) on tqpair(0xc92690): expected_datao=0, payload_size=1024 00:21:13.236 [2024-11-20 07:23:16.515735] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.236 [2024-11-20 07:23:16.515745] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:13.236 [2024-11-20 07:23:16.515752] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:13.236 [2024-11-20 07:23:16.515761] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.236 [2024-11-20 07:23:16.515770] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.237 [2024-11-20 07:23:16.515776] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.237 [2024-11-20 07:23:16.515783] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf4880) on tqpair=0xc92690 00:21:13.237 [2024-11-20 07:23:16.556417] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.237 [2024-11-20 07:23:16.556438] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.237 [2024-11-20 07:23:16.556446] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.237 [2024-11-20 07:23:16.556454] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf4700) on tqpair=0xc92690 00:21:13.237 [2024-11-20 07:23:16.556473] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.237 [2024-11-20 07:23:16.556483] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc92690) 00:21:13.237 [2024-11-20 07:23:16.556494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.237 [2024-11-20 07:23:16.556526] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf4700, cid 4, qid 0 00:21:13.237 [2024-11-20 07:23:16.556640] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:13.237 [2024-11-20 07:23:16.556655] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:13.237 [2024-11-20 07:23:16.556662] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:13.237 [2024-11-20 07:23:16.556669] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc92690): datao=0, datal=3072, cccid=4 00:21:13.237 [2024-11-20 07:23:16.556677] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcf4700) on tqpair(0xc92690): expected_datao=0, payload_size=3072 00:21:13.237 [2024-11-20 07:23:16.556684] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:21:13.237 [2024-11-20 07:23:16.556695] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:13.237 [2024-11-20 07:23:16.556702] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:13.237 [2024-11-20 07:23:16.556714] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.237 [2024-11-20 07:23:16.556724] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.237 [2024-11-20 07:23:16.556730] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.237 [2024-11-20 07:23:16.556737] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf4700) on tqpair=0xc92690 00:21:13.237 [2024-11-20 07:23:16.556752] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.237 [2024-11-20 07:23:16.556761] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc92690) 00:21:13.237 [2024-11-20 07:23:16.556772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.237 [2024-11-20 07:23:16.556800] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf4700, cid 4, qid 0 00:21:13.237 [2024-11-20 07:23:16.556896] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:13.237 [2024-11-20 07:23:16.556908] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:13.237 [2024-11-20 07:23:16.556915] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:13.237 [2024-11-20 07:23:16.556926] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc92690): datao=0, datal=8, cccid=4 00:21:13.237 [2024-11-20 07:23:16.556934] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcf4700) on tqpair(0xc92690): expected_datao=0, payload_size=8 00:21:13.237 [2024-11-20 07:23:16.556942] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.237 [2024-11-20 07:23:16.556951] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:13.237 [2024-11-20 07:23:16.556959] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:13.237 [2024-11-20 07:23:16.601326] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.237 [2024-11-20 07:23:16.601346] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.237 [2024-11-20 07:23:16.601369] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.237 [2024-11-20 07:23:16.601377] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf4700) on tqpair=0xc92690 00:21:13.237 ===================================================== 00:21:13.237 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:13.237 ===================================================== 00:21:13.237 Controller Capabilities/Features 00:21:13.237 ================================ 00:21:13.237 Vendor ID: 0000 00:21:13.237 Subsystem Vendor ID: 0000 00:21:13.237 Serial Number: .................... 00:21:13.237 Model Number: ........................................ 
00:21:13.237 Firmware Version: 25.01 00:21:13.237 Recommended Arb Burst: 0 00:21:13.237 IEEE OUI Identifier: 00 00 00 00:21:13.237 Multi-path I/O 00:21:13.237 May have multiple subsystem ports: No 00:21:13.237 May have multiple controllers: No 00:21:13.237 Associated with SR-IOV VF: No 00:21:13.237 Max Data Transfer Size: 131072 00:21:13.237 Max Number of Namespaces: 0 00:21:13.237 Max Number of I/O Queues: 1024 00:21:13.237 NVMe Specification Version (VS): 1.3 00:21:13.237 NVMe Specification Version (Identify): 1.3 00:21:13.237 Maximum Queue Entries: 128 00:21:13.237 Contiguous Queues Required: Yes 00:21:13.237 Arbitration Mechanisms Supported 00:21:13.237 Weighted Round Robin: Not Supported 00:21:13.237 Vendor Specific: Not Supported 00:21:13.237 Reset Timeout: 15000 ms 00:21:13.237 Doorbell Stride: 4 bytes 00:21:13.237 NVM Subsystem Reset: Not Supported 00:21:13.237 Command Sets Supported 00:21:13.237 NVM Command Set: Supported 00:21:13.237 Boot Partition: Not Supported 00:21:13.237 Memory Page Size Minimum: 4096 bytes 00:21:13.237 Memory Page Size Maximum: 4096 bytes 00:21:13.237 Persistent Memory Region: Not Supported 00:21:13.237 Optional Asynchronous Events Supported 00:21:13.237 Namespace Attribute Notices: Not Supported 00:21:13.237 Firmware Activation Notices: Not Supported 00:21:13.237 ANA Change Notices: Not Supported 00:21:13.237 PLE Aggregate Log Change Notices: Not Supported 00:21:13.237 LBA Status Info Alert Notices: Not Supported 00:21:13.237 EGE Aggregate Log Change Notices: Not Supported 00:21:13.237 Normal NVM Subsystem Shutdown event: Not Supported 00:21:13.237 Zone Descriptor Change Notices: Not Supported 00:21:13.237 Discovery Log Change Notices: Supported 00:21:13.237 Controller Attributes 00:21:13.237 128-bit Host Identifier: Not Supported 00:21:13.237 Non-Operational Permissive Mode: Not Supported 00:21:13.237 NVM Sets: Not Supported 00:21:13.237 Read Recovery Levels: Not Supported 00:21:13.237 Endurance Groups: Not Supported 00:21:13.237 Predictable Latency Mode: Not Supported 00:21:13.237 Traffic Based Keep ALive: Not Supported 00:21:13.237 Namespace Granularity: Not Supported 00:21:13.237 SQ Associations: Not Supported 00:21:13.237 UUID List: Not Supported 00:21:13.237 Multi-Domain Subsystem: Not Supported 00:21:13.237 Fixed Capacity Management: Not Supported 00:21:13.237 Variable Capacity Management: Not Supported 00:21:13.237 Delete Endurance Group: Not Supported 00:21:13.237 Delete NVM Set: Not Supported 00:21:13.237 Extended LBA Formats Supported: Not Supported 00:21:13.237 Flexible Data Placement Supported: Not Supported 00:21:13.237 00:21:13.237 Controller Memory Buffer Support 00:21:13.237 ================================ 00:21:13.237 Supported: No 00:21:13.237 00:21:13.237 Persistent Memory Region Support 00:21:13.237 ================================ 00:21:13.237 Supported: No 00:21:13.237 00:21:13.237 Admin Command Set Attributes 00:21:13.237 ============================ 00:21:13.237 Security Send/Receive: Not Supported 00:21:13.237 Format NVM: Not Supported 00:21:13.237 Firmware Activate/Download: Not Supported 00:21:13.237 Namespace Management: Not Supported 00:21:13.237 Device Self-Test: Not Supported 00:21:13.237 Directives: Not Supported 00:21:13.237 NVMe-MI: Not Supported 00:21:13.237 Virtualization Management: Not Supported 00:21:13.237 Doorbell Buffer Config: Not Supported 00:21:13.237 Get LBA Status Capability: Not Supported 00:21:13.237 Command & Feature Lockdown Capability: Not Supported 00:21:13.237 Abort Command Limit: 1 00:21:13.237 Async 
Event Request Limit: 4 00:21:13.237 Number of Firmware Slots: N/A 00:21:13.237 Firmware Slot 1 Read-Only: N/A 00:21:13.237 Firmware Activation Without Reset: N/A 00:21:13.237 Multiple Update Detection Support: N/A 00:21:13.237 Firmware Update Granularity: No Information Provided 00:21:13.237 Per-Namespace SMART Log: No 00:21:13.237 Asymmetric Namespace Access Log Page: Not Supported 00:21:13.237 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:13.237 Command Effects Log Page: Not Supported 00:21:13.237 Get Log Page Extended Data: Supported 00:21:13.237 Telemetry Log Pages: Not Supported 00:21:13.237 Persistent Event Log Pages: Not Supported 00:21:13.237 Supported Log Pages Log Page: May Support 00:21:13.237 Commands Supported & Effects Log Page: Not Supported 00:21:13.237 Feature Identifiers & Effects Log Page:May Support 00:21:13.237 NVMe-MI Commands & Effects Log Page: May Support 00:21:13.237 Data Area 4 for Telemetry Log: Not Supported 00:21:13.237 Error Log Page Entries Supported: 128 00:21:13.237 Keep Alive: Not Supported 00:21:13.237 00:21:13.237 NVM Command Set Attributes 00:21:13.237 ========================== 00:21:13.237 Submission Queue Entry Size 00:21:13.237 Max: 1 00:21:13.237 Min: 1 00:21:13.237 Completion Queue Entry Size 00:21:13.237 Max: 1 00:21:13.237 Min: 1 00:21:13.237 Number of Namespaces: 0 00:21:13.237 Compare Command: Not Supported 00:21:13.237 Write Uncorrectable Command: Not Supported 00:21:13.237 Dataset Management Command: Not Supported 00:21:13.237 Write Zeroes Command: Not Supported 00:21:13.237 Set Features Save Field: Not Supported 00:21:13.237 Reservations: Not Supported 00:21:13.237 Timestamp: Not Supported 00:21:13.238 Copy: Not Supported 00:21:13.238 Volatile Write Cache: Not Present 00:21:13.238 Atomic Write Unit (Normal): 1 00:21:13.238 Atomic Write Unit (PFail): 1 00:21:13.238 Atomic Compare & Write Unit: 1 00:21:13.238 Fused Compare & Write: Supported 00:21:13.238 Scatter-Gather List 00:21:13.238 SGL Command Set: Supported 00:21:13.238 SGL Keyed: Supported 00:21:13.238 SGL Bit Bucket Descriptor: Not Supported 00:21:13.238 SGL Metadata Pointer: Not Supported 00:21:13.238 Oversized SGL: Not Supported 00:21:13.238 SGL Metadata Address: Not Supported 00:21:13.238 SGL Offset: Supported 00:21:13.238 Transport SGL Data Block: Not Supported 00:21:13.238 Replay Protected Memory Block: Not Supported 00:21:13.238 00:21:13.238 Firmware Slot Information 00:21:13.238 ========================= 00:21:13.238 Active slot: 0 00:21:13.238 00:21:13.238 00:21:13.238 Error Log 00:21:13.238 ========= 00:21:13.238 00:21:13.238 Active Namespaces 00:21:13.238 ================= 00:21:13.238 Discovery Log Page 00:21:13.238 ================== 00:21:13.238 Generation Counter: 2 00:21:13.238 Number of Records: 2 00:21:13.238 Record Format: 0 00:21:13.238 00:21:13.238 Discovery Log Entry 0 00:21:13.238 ---------------------- 00:21:13.238 Transport Type: 3 (TCP) 00:21:13.238 Address Family: 1 (IPv4) 00:21:13.238 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:13.238 Entry Flags: 00:21:13.238 Duplicate Returned Information: 1 00:21:13.238 Explicit Persistent Connection Support for Discovery: 1 00:21:13.238 Transport Requirements: 00:21:13.238 Secure Channel: Not Required 00:21:13.238 Port ID: 0 (0x0000) 00:21:13.238 Controller ID: 65535 (0xffff) 00:21:13.238 Admin Max SQ Size: 128 00:21:13.238 Transport Service Identifier: 4420 00:21:13.238 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:13.238 Transport Address: 10.0.0.2 00:21:13.238 
Discovery Log Entry 1 00:21:13.238 ---------------------- 00:21:13.238 Transport Type: 3 (TCP) 00:21:13.238 Address Family: 1 (IPv4) 00:21:13.238 Subsystem Type: 2 (NVM Subsystem) 00:21:13.238 Entry Flags: 00:21:13.238 Duplicate Returned Information: 0 00:21:13.238 Explicit Persistent Connection Support for Discovery: 0 00:21:13.238 Transport Requirements: 00:21:13.238 Secure Channel: Not Required 00:21:13.238 Port ID: 0 (0x0000) 00:21:13.238 Controller ID: 65535 (0xffff) 00:21:13.238 Admin Max SQ Size: 128 00:21:13.238 Transport Service Identifier: 4420 00:21:13.238 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:13.238 Transport Address: 10.0.0.2 [2024-11-20 07:23:16.601501] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:21:13.238 [2024-11-20 07:23:16.601524] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf4100) on tqpair=0xc92690 00:21:13.238 [2024-11-20 07:23:16.601538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.238 [2024-11-20 07:23:16.601549] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf4280) on tqpair=0xc92690 00:21:13.238 [2024-11-20 07:23:16.601557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.238 [2024-11-20 07:23:16.601567] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf4400) on tqpair=0xc92690 00:21:13.238 [2024-11-20 07:23:16.601575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.238 [2024-11-20 07:23:16.601584] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf4580) on tqpair=0xc92690 00:21:13.238 [2024-11-20 07:23:16.601591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.238 [2024-11-20 07:23:16.601610] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.238 [2024-11-20 07:23:16.601619] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.238 [2024-11-20 07:23:16.601626] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc92690) 00:21:13.238 [2024-11-20 07:23:16.601652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.238 [2024-11-20 07:23:16.601678] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf4580, cid 3, qid 0 00:21:13.238 [2024-11-20 07:23:16.601802] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.238 [2024-11-20 07:23:16.601815] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.238 [2024-11-20 07:23:16.601822] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.238 [2024-11-20 07:23:16.601829] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf4580) on tqpair=0xc92690 00:21:13.238 [2024-11-20 07:23:16.601841] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.238 [2024-11-20 07:23:16.601849] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.238 [2024-11-20 07:23:16.601855] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc92690) 00:21:13.238 [2024-11-20 07:23:16.601866] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.238 [2024-11-20 07:23:16.601892] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf4580, cid 3, qid 0 00:21:13.238 [2024-11-20 07:23:16.601991] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.238 [2024-11-20 07:23:16.602006] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.238 [2024-11-20 07:23:16.602017] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.238 [2024-11-20 07:23:16.602026] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf4580) on tqpair=0xc92690 00:21:13.238 [2024-11-20 07:23:16.602035] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:21:13.238 [2024-11-20 07:23:16.602043] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:21:13.238 [2024-11-20 07:23:16.602060] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.238 [2024-11-20 07:23:16.602069] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.238 [2024-11-20 07:23:16.602076] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc92690) 00:21:13.238 [2024-11-20 07:23:16.602087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.238 [2024-11-20 07:23:16.602108] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf4580, cid 3, qid 0 00:21:13.238 [2024-11-20 07:23:16.602183] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.238 [2024-11-20 07:23:16.602198] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.238 [2024-11-20 07:23:16.602205] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.238 [2024-11-20 07:23:16.602212] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf4580) on tqpair=0xc92690 00:21:13.238 [2024-11-20 07:23:16.602230] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.238 [2024-11-20 07:23:16.602239] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.238 [2024-11-20 07:23:16.602246] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc92690) 00:21:13.238 [2024-11-20 07:23:16.602257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.238 [2024-11-20 07:23:16.602278] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf4580, cid 3, qid 0 00:21:13.238 [2024-11-20 07:23:16.602367] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.238 [2024-11-20 07:23:16.602382] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.238 [2024-11-20 07:23:16.602389] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.238 [2024-11-20 07:23:16.602396] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf4580) on tqpair=0xc92690 00:21:13.238 [2024-11-20 07:23:16.602412] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.238 [2024-11-20 07:23:16.602421] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.238 [2024-11-20 07:23:16.602428] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc92690) 00:21:13.238 [2024-11-20 07:23:16.602438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.238 [2024-11-20 07:23:16.602460] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf4580, cid 3, qid 0 00:21:13.238 [2024-11-20 07:23:16.602551] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.238 [2024-11-20 07:23:16.602563] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.238 [2024-11-20 07:23:16.602570] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.238 [2024-11-20 07:23:16.602576] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf4580) on tqpair=0xc92690 00:21:13.238 [2024-11-20 07:23:16.602592] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.238 [2024-11-20 07:23:16.602601] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.238 [2024-11-20 07:23:16.602608] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc92690) 00:21:13.238 [2024-11-20 07:23:16.602618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.238 [2024-11-20 07:23:16.602638] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf4580, cid 3, qid 0 00:21:13.238 [2024-11-20 07:23:16.602709] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.238 [2024-11-20 07:23:16.602722] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.238 [2024-11-20 07:23:16.602729] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.238 [2024-11-20 07:23:16.602736] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf4580) on tqpair=0xc92690 00:21:13.238 [2024-11-20 07:23:16.602751] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.238 [2024-11-20 07:23:16.602760] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.238 [2024-11-20 07:23:16.602767] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc92690) 00:21:13.239 [2024-11-20 07:23:16.602777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.239 [2024-11-20 07:23:16.602797] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf4580, cid 3, qid 0 00:21:13.239 [2024-11-20 07:23:16.602887] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.239 [2024-11-20 07:23:16.602899] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.239 [2024-11-20 07:23:16.602906] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.239 [2024-11-20 07:23:16.602912] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf4580) on tqpair=0xc92690 00:21:13.239 [2024-11-20 07:23:16.602928] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.239 [2024-11-20 07:23:16.602937] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.239 [2024-11-20 07:23:16.602944] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc92690) 00:21:13.239 [2024-11-20 07:23:16.602954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.239 [2024-11-20 07:23:16.602974] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf4580, cid 3, qid 0 00:21:13.239 [2024-11-20 07:23:16.603047] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.239 [2024-11-20 07:23:16.603059] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.239 [2024-11-20 07:23:16.603066] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.239 [2024-11-20 07:23:16.603072] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf4580) on tqpair=0xc92690 00:21:13.239 [2024-11-20 07:23:16.603088] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.239 [2024-11-20 07:23:16.603096] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.239 [2024-11-20 07:23:16.603103] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc92690) 00:21:13.239 [2024-11-20 07:23:16.603113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.239 [2024-11-20 07:23:16.603133] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf4580, cid 3, qid 0 00:21:13.239 [2024-11-20 07:23:16.603203] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.239 [2024-11-20 07:23:16.603215] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.239 [2024-11-20 07:23:16.603222] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.239 [2024-11-20 07:23:16.603230] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf4580) on tqpair=0xc92690 00:21:13.239 [2024-11-20 07:23:16.603245] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.239 [2024-11-20 07:23:16.603254] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.239 [2024-11-20 07:23:16.603260] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc92690) 00:21:13.239 [2024-11-20 07:23:16.603270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.239 [2024-11-20 07:23:16.603291] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf4580, cid 3, qid 0 00:21:13.239 [2024-11-20 07:23:16.603387] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.239 [2024-11-20 07:23:16.603402] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.239 [2024-11-20 07:23:16.603413] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.239 [2024-11-20 07:23:16.603420] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf4580) on tqpair=0xc92690 00:21:13.239 [2024-11-20 07:23:16.603437] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.239 [2024-11-20 07:23:16.603446] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.239 [2024-11-20 07:23:16.603453] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc92690) 00:21:13.239 [2024-11-20 07:23:16.603463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.239 [2024-11-20 07:23:16.603484] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf4580, cid 3, qid 0 00:21:13.239 [2024-11-20 07:23:16.603560] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.239 [2024-11-20 07:23:16.603578] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.239 [2024-11-20 07:23:16.603585] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.239 [2024-11-20 07:23:16.603592] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf4580) on tqpair=0xc92690 00:21:13.239 [2024-11-20 07:23:16.603608] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.239 [2024-11-20 07:23:16.603617] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.239 [2024-11-20 07:23:16.603623] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc92690) 00:21:13.239 [2024-11-20 07:23:16.603633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.239 [2024-11-20 07:23:16.603654] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf4580, cid 3, qid 0 00:21:13.239 [2024-11-20 07:23:16.603727] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.239 [2024-11-20 07:23:16.603740] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.239 [2024-11-20 07:23:16.603747] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.239 [2024-11-20 07:23:16.603753] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf4580) on tqpair=0xc92690 00:21:13.239 [2024-11-20 07:23:16.603769] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.239 [2024-11-20 07:23:16.603777] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.239 [2024-11-20 07:23:16.603784] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc92690) 00:21:13.239 [2024-11-20 07:23:16.603795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.239 [2024-11-20 07:23:16.603814] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf4580, cid 3, qid 0 00:21:13.239 [2024-11-20 07:23:16.603891] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.239 [2024-11-20 07:23:16.603905] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.239 [2024-11-20 07:23:16.603912] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.239 [2024-11-20 07:23:16.603918] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf4580) on tqpair=0xc92690 00:21:13.239 [2024-11-20 07:23:16.603935] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.239 [2024-11-20 07:23:16.603944] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.239 [2024-11-20 07:23:16.603950] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc92690) 00:21:13.239 [2024-11-20 07:23:16.603960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.239 [2024-11-20 07:23:16.603981] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf4580, cid 3, qid 0 00:21:13.239 [2024-11-20 07:23:16.604061] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.239 [2024-11-20 07:23:16.604075] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.239 [2024-11-20 07:23:16.604082] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.239 [2024-11-20 07:23:16.604092] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf4580) on tqpair=0xc92690 00:21:13.239 [2024-11-20 07:23:16.604109] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.239 [2024-11-20 07:23:16.604119] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.239 [2024-11-20 07:23:16.604125] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc92690) 00:21:13.239 [2024-11-20 07:23:16.604135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.239 [2024-11-20 07:23:16.604156] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf4580, cid 3, qid 0 00:21:13.239 [2024-11-20 07:23:16.604230] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.239 [2024-11-20 07:23:16.604242] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.239 [2024-11-20 07:23:16.604249] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.239 [2024-11-20 07:23:16.604256] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf4580) on tqpair=0xc92690 00:21:13.239 [2024-11-20 07:23:16.604272] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.239 [2024-11-20 07:23:16.604281] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.239 [2024-11-20 07:23:16.604287] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc92690) 00:21:13.239 [2024-11-20 07:23:16.604297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.239 [2024-11-20 07:23:16.604326] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf4580, cid 3, qid 0 00:21:13.239 [2024-11-20 07:23:16.604421] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.239 [2024-11-20 07:23:16.604435] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.239 [2024-11-20 07:23:16.604442] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.239 [2024-11-20 07:23:16.604448] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf4580) on tqpair=0xc92690 00:21:13.239 [2024-11-20 07:23:16.604465] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.239 [2024-11-20 07:23:16.604474] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.239 [2024-11-20 07:23:16.604480] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc92690) 00:21:13.239 [2024-11-20 07:23:16.604490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.240 [2024-11-20 07:23:16.604511] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf4580, cid 3, qid 0 00:21:13.240 [2024-11-20 07:23:16.604597] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.240 [2024-11-20 07:23:16.604611] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.240 [2024-11-20 07:23:16.604618] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.240 [2024-11-20 07:23:16.604624] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf4580) on tqpair=0xc92690 00:21:13.240 
[2024-11-20 07:23:16.604640] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.240 [2024-11-20 07:23:16.604649] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.240 [2024-11-20 07:23:16.604656] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc92690) 00:21:13.240 [2024-11-20 07:23:16.604666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.240 [2024-11-20 07:23:16.604687] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf4580, cid 3, qid 0 00:21:13.240 [2024-11-20 07:23:16.604770] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.240 [2024-11-20 07:23:16.604783] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.240 [2024-11-20 07:23:16.604790] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.240 [2024-11-20 07:23:16.604797] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf4580) on tqpair=0xc92690 00:21:13.240 [2024-11-20 07:23:16.604817] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.240 [2024-11-20 07:23:16.604827] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.240 [2024-11-20 07:23:16.604834] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc92690) 00:21:13.240 [2024-11-20 07:23:16.604844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.240 [2024-11-20 07:23:16.604865] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf4580, cid 3, qid 0 00:21:13.240 [2024-11-20 07:23:16.604958] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.240 [2024-11-20 07:23:16.604971] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.240 [2024-11-20 07:23:16.604978] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.240 [2024-11-20 07:23:16.604985] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf4580) on tqpair=0xc92690 00:21:13.240 [2024-11-20 07:23:16.605001] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.240 [2024-11-20 07:23:16.605009] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.240 [2024-11-20 07:23:16.605016] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc92690) 00:21:13.240 [2024-11-20 07:23:16.605026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.240 [2024-11-20 07:23:16.605047] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf4580, cid 3, qid 0 00:21:13.240 [2024-11-20 07:23:16.605125] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.240 [2024-11-20 07:23:16.605138] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.240 [2024-11-20 07:23:16.605145] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.240 [2024-11-20 07:23:16.605152] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf4580) on tqpair=0xc92690 00:21:13.240 [2024-11-20 07:23:16.605168] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.240 [2024-11-20 07:23:16.605177] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.240 [2024-11-20 
07:23:16.605183] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc92690) 00:21:13.240 [2024-11-20 07:23:16.605194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.240 [2024-11-20 07:23:16.605214] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf4580, cid 3, qid 0 00:21:13.240 [2024-11-20 07:23:16.605290] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.240 [2024-11-20 07:23:16.609326] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.240 [2024-11-20 07:23:16.609350] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.240 [2024-11-20 07:23:16.609357] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf4580) on tqpair=0xc92690 00:21:13.240 [2024-11-20 07:23:16.609376] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.240 [2024-11-20 07:23:16.609385] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.240 [2024-11-20 07:23:16.609391] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc92690) 00:21:13.240 [2024-11-20 07:23:16.609402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.240 [2024-11-20 07:23:16.609423] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf4580, cid 3, qid 0 00:21:13.240 [2024-11-20 07:23:16.609531] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.240 [2024-11-20 07:23:16.609545] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.240 [2024-11-20 07:23:16.609552] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.240 [2024-11-20 07:23:16.609559] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf4580) on tqpair=0xc92690 00:21:13.240 [2024-11-20 07:23:16.609572] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:21:13.240 00:21:13.240 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:13.240 [2024-11-20 07:23:16.648224] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:21:13.240 [2024-11-20 07:23:16.648277] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2561741 ] 00:21:13.503 [2024-11-20 07:23:16.701798] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:21:13.503 [2024-11-20 07:23:16.701850] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:13.503 [2024-11-20 07:23:16.701860] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:13.503 [2024-11-20 07:23:16.701878] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:13.503 [2024-11-20 07:23:16.701892] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:13.503 [2024-11-20 07:23:16.702349] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:21:13.503 [2024-11-20 07:23:16.702388] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1607690 0 00:21:13.503 [2024-11-20 07:23:16.712319] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:13.503 [2024-11-20 07:23:16.712339] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:13.503 [2024-11-20 07:23:16.712346] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:13.503 [2024-11-20 07:23:16.712352] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:13.503 [2024-11-20 07:23:16.712401] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.503 [2024-11-20 07:23:16.712413] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.503 [2024-11-20 07:23:16.712420] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1607690) 00:21:13.503 [2024-11-20 07:23:16.712434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:13.503 [2024-11-20 07:23:16.712461] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669100, cid 0, qid 0 00:21:13.503 [2024-11-20 07:23:16.720318] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.503 [2024-11-20 07:23:16.720335] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.503 [2024-11-20 07:23:16.720343] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.503 [2024-11-20 07:23:16.720350] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669100) on tqpair=0x1607690 00:21:13.503 [2024-11-20 07:23:16.720366] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:13.503 [2024-11-20 07:23:16.720394] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:21:13.503 [2024-11-20 07:23:16.720403] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:21:13.503 [2024-11-20 07:23:16.720421] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.503 [2024-11-20 07:23:16.720431] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.503 [2024-11-20 07:23:16.720437] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1607690) 00:21:13.503 [2024-11-20 07:23:16.720449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.503 [2024-11-20 07:23:16.720477] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669100, cid 0, qid 0 00:21:13.503 [2024-11-20 07:23:16.720567] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.503 [2024-11-20 07:23:16.720582] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.503 [2024-11-20 07:23:16.720589] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.503 [2024-11-20 07:23:16.720596] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669100) on tqpair=0x1607690 00:21:13.503 [2024-11-20 07:23:16.720604] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:21:13.503 [2024-11-20 07:23:16.720617] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:21:13.503 [2024-11-20 07:23:16.720629] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.503 [2024-11-20 07:23:16.720637] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.503 [2024-11-20 07:23:16.720643] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1607690) 00:21:13.503 [2024-11-20 07:23:16.720654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.503 [2024-11-20 07:23:16.720675] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669100, cid 0, qid 0 00:21:13.503 [2024-11-20 07:23:16.720761] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.503 [2024-11-20 07:23:16.720773] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.503 [2024-11-20 07:23:16.720780] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.503 [2024-11-20 07:23:16.720786] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669100) on tqpair=0x1607690 00:21:13.503 [2024-11-20 07:23:16.720795] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:21:13.503 [2024-11-20 07:23:16.720808] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:21:13.503 [2024-11-20 07:23:16.720820] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.503 [2024-11-20 07:23:16.720827] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.503 [2024-11-20 07:23:16.720834] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1607690) 00:21:13.503 [2024-11-20 07:23:16.720844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.503 [2024-11-20 07:23:16.720865] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669100, cid 0, qid 0 00:21:13.503 [2024-11-20 07:23:16.720947] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.503 [2024-11-20 07:23:16.720959] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.503 [2024-11-20 
07:23:16.720966] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.503 [2024-11-20 07:23:16.720973] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669100) on tqpair=0x1607690 00:21:13.503 [2024-11-20 07:23:16.720981] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:13.503 [2024-11-20 07:23:16.720997] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.503 [2024-11-20 07:23:16.721005] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.503 [2024-11-20 07:23:16.721012] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1607690) 00:21:13.503 [2024-11-20 07:23:16.721022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.503 [2024-11-20 07:23:16.721042] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669100, cid 0, qid 0 00:21:13.503 [2024-11-20 07:23:16.721114] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.503 [2024-11-20 07:23:16.721126] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.503 [2024-11-20 07:23:16.721137] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.503 [2024-11-20 07:23:16.721144] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669100) on tqpair=0x1607690 00:21:13.503 [2024-11-20 07:23:16.721152] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:21:13.503 [2024-11-20 07:23:16.721161] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:21:13.503 [2024-11-20 07:23:16.721173] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:13.503 [2024-11-20 07:23:16.721283] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:21:13.503 [2024-11-20 07:23:16.721291] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:13.503 [2024-11-20 07:23:16.721314] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.503 [2024-11-20 07:23:16.721324] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.503 [2024-11-20 07:23:16.721331] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1607690) 00:21:13.503 [2024-11-20 07:23:16.721341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.503 [2024-11-20 07:23:16.721363] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669100, cid 0, qid 0 00:21:13.503 [2024-11-20 07:23:16.721443] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.503 [2024-11-20 07:23:16.721456] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.503 [2024-11-20 07:23:16.721463] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.503 [2024-11-20 07:23:16.721470] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669100) on tqpair=0x1607690 00:21:13.503 
[2024-11-20 07:23:16.721478] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:13.503 [2024-11-20 07:23:16.721494] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.503 [2024-11-20 07:23:16.721503] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.503 [2024-11-20 07:23:16.721510] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1607690) 00:21:13.503 [2024-11-20 07:23:16.721520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.503 [2024-11-20 07:23:16.721541] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669100, cid 0, qid 0 00:21:13.503 [2024-11-20 07:23:16.721614] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.503 [2024-11-20 07:23:16.721626] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.503 [2024-11-20 07:23:16.721632] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.503 [2024-11-20 07:23:16.721639] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669100) on tqpair=0x1607690 00:21:13.503 [2024-11-20 07:23:16.721646] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:13.503 [2024-11-20 07:23:16.721655] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:21:13.503 [2024-11-20 07:23:16.721667] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:21:13.503 [2024-11-20 07:23:16.721686] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:21:13.503 [2024-11-20 07:23:16.721700] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.503 [2024-11-20 07:23:16.721707] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1607690) 00:21:13.503 [2024-11-20 07:23:16.721721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.503 [2024-11-20 07:23:16.721743] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669100, cid 0, qid 0 00:21:13.503 [2024-11-20 07:23:16.721864] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:13.503 [2024-11-20 07:23:16.721879] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:13.503 [2024-11-20 07:23:16.721886] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:13.503 [2024-11-20 07:23:16.721892] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1607690): datao=0, datal=4096, cccid=0 00:21:13.503 [2024-11-20 07:23:16.721900] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1669100) on tqpair(0x1607690): expected_datao=0, payload_size=4096 00:21:13.503 [2024-11-20 07:23:16.721907] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.503 [2024-11-20 07:23:16.721917] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:13.503 [2024-11-20 07:23:16.721925] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:13.503 [2024-11-20 07:23:16.721936] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.503 [2024-11-20 07:23:16.721946] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.503 [2024-11-20 07:23:16.721952] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.503 [2024-11-20 07:23:16.721959] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669100) on tqpair=0x1607690 00:21:13.503 [2024-11-20 07:23:16.721970] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:21:13.503 [2024-11-20 07:23:16.721978] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:21:13.503 [2024-11-20 07:23:16.721985] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:21:13.503 [2024-11-20 07:23:16.721997] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:21:13.503 [2024-11-20 07:23:16.722005] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:21:13.503 [2024-11-20 07:23:16.722013] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:21:13.503 [2024-11-20 07:23:16.722033] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:21:13.503 [2024-11-20 07:23:16.722046] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.503 [2024-11-20 07:23:16.722053] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.504 [2024-11-20 07:23:16.722060] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1607690) 00:21:13.504 [2024-11-20 07:23:16.722070] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:13.504 [2024-11-20 07:23:16.722092] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669100, cid 0, qid 0 00:21:13.504 [2024-11-20 07:23:16.722174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.504 [2024-11-20 07:23:16.722186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.504 [2024-11-20 07:23:16.722193] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.504 [2024-11-20 07:23:16.722199] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669100) on tqpair=0x1607690 00:21:13.504 [2024-11-20 07:23:16.722209] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.504 [2024-11-20 07:23:16.722216] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.504 [2024-11-20 07:23:16.722223] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1607690) 00:21:13.504 [2024-11-20 07:23:16.722233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.504 [2024-11-20 07:23:16.722247] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.504 [2024-11-20 07:23:16.722254] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.504 [2024-11-20 
07:23:16.722261] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1607690) 00:21:13.504 [2024-11-20 07:23:16.722269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.504 [2024-11-20 07:23:16.722279] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.504 [2024-11-20 07:23:16.722286] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.504 [2024-11-20 07:23:16.722292] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1607690) 00:21:13.504 [2024-11-20 07:23:16.722300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.504 [2024-11-20 07:23:16.722320] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.504 [2024-11-20 07:23:16.722327] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.504 [2024-11-20 07:23:16.722333] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1607690) 00:21:13.504 [2024-11-20 07:23:16.722342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.504 [2024-11-20 07:23:16.722351] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:13.504 [2024-11-20 07:23:16.722366] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:13.504 [2024-11-20 07:23:16.722377] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.504 [2024-11-20 07:23:16.722384] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1607690) 00:21:13.504 [2024-11-20 07:23:16.722394] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.504 [2024-11-20 07:23:16.722416] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669100, cid 0, qid 0 00:21:13.504 [2024-11-20 07:23:16.722428] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669280, cid 1, qid 0 00:21:13.504 [2024-11-20 07:23:16.722435] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669400, cid 2, qid 0 00:21:13.504 [2024-11-20 07:23:16.722443] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669580, cid 3, qid 0 00:21:13.504 [2024-11-20 07:23:16.722450] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669700, cid 4, qid 0 00:21:13.504 [2024-11-20 07:23:16.722555] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.504 [2024-11-20 07:23:16.722566] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.504 [2024-11-20 07:23:16.722573] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.504 [2024-11-20 07:23:16.722579] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669700) on tqpair=0x1607690 00:21:13.504 [2024-11-20 07:23:16.722592] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:21:13.504 [2024-11-20 07:23:16.722601] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:13.504 [2024-11-20 07:23:16.722615] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:21:13.504 [2024-11-20 07:23:16.722627] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:13.504 [2024-11-20 07:23:16.722637] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.504 [2024-11-20 07:23:16.722645] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.504 [2024-11-20 07:23:16.722654] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1607690) 00:21:13.504 [2024-11-20 07:23:16.722665] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:13.504 [2024-11-20 07:23:16.722686] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669700, cid 4, qid 0 00:21:13.504 [2024-11-20 07:23:16.722758] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.504 [2024-11-20 07:23:16.722770] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.504 [2024-11-20 07:23:16.722777] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.504 [2024-11-20 07:23:16.722784] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669700) on tqpair=0x1607690 00:21:13.504 [2024-11-20 07:23:16.722854] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:21:13.504 [2024-11-20 07:23:16.722875] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:13.504 [2024-11-20 07:23:16.722890] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.504 [2024-11-20 07:23:16.722898] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1607690) 00:21:13.504 [2024-11-20 07:23:16.722908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.504 [2024-11-20 07:23:16.722929] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669700, cid 4, qid 0 00:21:13.504 [2024-11-20 07:23:16.723021] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:13.504 [2024-11-20 07:23:16.723035] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:13.504 [2024-11-20 07:23:16.723042] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:13.504 [2024-11-20 07:23:16.723048] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1607690): datao=0, datal=4096, cccid=4 00:21:13.504 [2024-11-20 07:23:16.723056] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1669700) on tqpair(0x1607690): expected_datao=0, payload_size=4096 00:21:13.504 [2024-11-20 07:23:16.723063] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.504 [2024-11-20 07:23:16.723079] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:13.504 [2024-11-20 07:23:16.723089] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:13.504 [2024-11-20 
07:23:16.723128] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.504 [2024-11-20 07:23:16.723141] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.504 [2024-11-20 07:23:16.723148] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.504 [2024-11-20 07:23:16.723154] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669700) on tqpair=0x1607690 00:21:13.504 [2024-11-20 07:23:16.723173] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:21:13.504 [2024-11-20 07:23:16.723191] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:21:13.504 [2024-11-20 07:23:16.723210] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:21:13.504 [2024-11-20 07:23:16.723223] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.504 [2024-11-20 07:23:16.723231] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1607690) 00:21:13.504 [2024-11-20 07:23:16.723241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.504 [2024-11-20 07:23:16.723262] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669700, cid 4, qid 0 00:21:13.504 [2024-11-20 07:23:16.723371] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:13.504 [2024-11-20 07:23:16.723389] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:13.504 [2024-11-20 07:23:16.723397] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:13.504 [2024-11-20 07:23:16.723403] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1607690): datao=0, datal=4096, cccid=4 00:21:13.504 [2024-11-20 07:23:16.723411] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1669700) on tqpair(0x1607690): expected_datao=0, payload_size=4096 00:21:13.504 [2024-11-20 07:23:16.723418] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.504 [2024-11-20 07:23:16.723434] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:13.504 [2024-11-20 07:23:16.723443] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:13.504 [2024-11-20 07:23:16.723454] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.504 [2024-11-20 07:23:16.723463] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.504 [2024-11-20 07:23:16.723470] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.504 [2024-11-20 07:23:16.723477] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669700) on tqpair=0x1607690 00:21:13.504 [2024-11-20 07:23:16.723501] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:13.504 [2024-11-20 07:23:16.723520] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:13.504 [2024-11-20 07:23:16.723534] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.504 [2024-11-20 07:23:16.723542] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x1607690) 00:21:13.504 [2024-11-20 07:23:16.723552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.504 [2024-11-20 07:23:16.723573] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669700, cid 4, qid 0 00:21:13.504 [2024-11-20 07:23:16.723662] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:13.504 [2024-11-20 07:23:16.723674] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:13.504 [2024-11-20 07:23:16.723681] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:13.504 [2024-11-20 07:23:16.723687] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1607690): datao=0, datal=4096, cccid=4 00:21:13.504 [2024-11-20 07:23:16.723694] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1669700) on tqpair(0x1607690): expected_datao=0, payload_size=4096 00:21:13.504 [2024-11-20 07:23:16.723702] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.505 [2024-11-20 07:23:16.723717] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:13.505 [2024-11-20 07:23:16.723725] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:13.505 [2024-11-20 07:23:16.723736] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.505 [2024-11-20 07:23:16.723746] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.505 [2024-11-20 07:23:16.723752] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.505 [2024-11-20 07:23:16.723759] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669700) on tqpair=0x1607690 00:21:13.505 [2024-11-20 07:23:16.723774] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:13.505 [2024-11-20 07:23:16.723789] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:21:13.505 [2024-11-20 07:23:16.723805] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:21:13.505 [2024-11-20 07:23:16.723817] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:21:13.505 [2024-11-20 07:23:16.723826] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:13.505 [2024-11-20 07:23:16.723839] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:21:13.505 [2024-11-20 07:23:16.723849] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:21:13.505 [2024-11-20 07:23:16.723857] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:21:13.505 [2024-11-20 07:23:16.723866] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:21:13.505 [2024-11-20 07:23:16.723885] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.505 
[2024-11-20 07:23:16.723894] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1607690) 00:21:13.505 [2024-11-20 07:23:16.723904] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.505 [2024-11-20 07:23:16.723915] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.505 [2024-11-20 07:23:16.723922] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.505 [2024-11-20 07:23:16.723928] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1607690) 00:21:13.505 [2024-11-20 07:23:16.723937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.505 [2024-11-20 07:23:16.723962] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669700, cid 4, qid 0 00:21:13.505 [2024-11-20 07:23:16.723989] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669880, cid 5, qid 0 00:21:13.505 [2024-11-20 07:23:16.724098] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.505 [2024-11-20 07:23:16.724112] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.505 [2024-11-20 07:23:16.724119] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.505 [2024-11-20 07:23:16.724126] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669700) on tqpair=0x1607690 00:21:13.505 [2024-11-20 07:23:16.724136] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.505 [2024-11-20 07:23:16.724145] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.505 [2024-11-20 07:23:16.724152] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.505 [2024-11-20 07:23:16.724158] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669880) on tqpair=0x1607690 00:21:13.505 [2024-11-20 07:23:16.724173] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.505 [2024-11-20 07:23:16.724182] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1607690) 00:21:13.505 [2024-11-20 07:23:16.724192] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.505 [2024-11-20 07:23:16.724212] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669880, cid 5, qid 0 00:21:13.505 [2024-11-20 07:23:16.724294] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.505 [2024-11-20 07:23:16.728317] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.505 [2024-11-20 07:23:16.728329] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.505 [2024-11-20 07:23:16.728336] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669880) on tqpair=0x1607690 00:21:13.505 [2024-11-20 07:23:16.728354] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.505 [2024-11-20 07:23:16.728363] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1607690) 00:21:13.505 [2024-11-20 07:23:16.728374] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.505 [2024-11-20 07:23:16.728396] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669880, cid 5, qid 0 00:21:13.505 [2024-11-20 07:23:16.728490] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.505 [2024-11-20 07:23:16.728504] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.505 [2024-11-20 07:23:16.728511] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.505 [2024-11-20 07:23:16.728518] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669880) on tqpair=0x1607690 00:21:13.505 [2024-11-20 07:23:16.728534] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.505 [2024-11-20 07:23:16.728542] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1607690) 00:21:13.505 [2024-11-20 07:23:16.728552] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.505 [2024-11-20 07:23:16.728573] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669880, cid 5, qid 0 00:21:13.505 [2024-11-20 07:23:16.728644] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.505 [2024-11-20 07:23:16.728656] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.505 [2024-11-20 07:23:16.728662] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.505 [2024-11-20 07:23:16.728669] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669880) on tqpair=0x1607690 00:21:13.505 [2024-11-20 07:23:16.728693] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.505 [2024-11-20 07:23:16.728704] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1607690) 00:21:13.505 [2024-11-20 07:23:16.728714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.505 [2024-11-20 07:23:16.728726] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.505 [2024-11-20 07:23:16.728734] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1607690) 00:21:13.505 [2024-11-20 07:23:16.728743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.505 [2024-11-20 07:23:16.728754] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.505 [2024-11-20 07:23:16.728762] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1607690) 00:21:13.505 [2024-11-20 07:23:16.728771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.505 [2024-11-20 07:23:16.728782] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.505 [2024-11-20 07:23:16.728790] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1607690) 00:21:13.505 [2024-11-20 07:23:16.728799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.505 [2024-11-20 07:23:16.728820] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669880, cid 5, qid 0 00:21:13.505 
[2024-11-20 07:23:16.728831] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669700, cid 4, qid 0 00:21:13.505 [2024-11-20 07:23:16.728839] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669a00, cid 6, qid 0 00:21:13.505 [2024-11-20 07:23:16.728847] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669b80, cid 7, qid 0 00:21:13.505 [2024-11-20 07:23:16.728998] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:13.505 [2024-11-20 07:23:16.729010] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:13.505 [2024-11-20 07:23:16.729016] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:13.505 [2024-11-20 07:23:16.729023] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1607690): datao=0, datal=8192, cccid=5 00:21:13.505 [2024-11-20 07:23:16.729030] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1669880) on tqpair(0x1607690): expected_datao=0, payload_size=8192 00:21:13.505 [2024-11-20 07:23:16.729041] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.505 [2024-11-20 07:23:16.729059] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:13.505 [2024-11-20 07:23:16.729069] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:13.505 [2024-11-20 07:23:16.729081] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:13.505 [2024-11-20 07:23:16.729091] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:13.505 [2024-11-20 07:23:16.729097] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:13.505 [2024-11-20 07:23:16.729103] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1607690): datao=0, datal=512, cccid=4 00:21:13.505 [2024-11-20 07:23:16.729110] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1669700) on tqpair(0x1607690): expected_datao=0, payload_size=512 00:21:13.505 [2024-11-20 07:23:16.729117] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.505 [2024-11-20 07:23:16.729126] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:13.505 [2024-11-20 07:23:16.729133] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:13.505 [2024-11-20 07:23:16.729142] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:13.505 [2024-11-20 07:23:16.729150] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:13.505 [2024-11-20 07:23:16.729157] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:13.505 [2024-11-20 07:23:16.729163] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1607690): datao=0, datal=512, cccid=6 00:21:13.505 [2024-11-20 07:23:16.729170] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1669a00) on tqpair(0x1607690): expected_datao=0, payload_size=512 00:21:13.505 [2024-11-20 07:23:16.729177] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.505 [2024-11-20 07:23:16.729186] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:13.505 [2024-11-20 07:23:16.729193] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:13.505 [2024-11-20 07:23:16.729201] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:13.505 [2024-11-20 07:23:16.729209] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:13.505 [2024-11-20 07:23:16.729216] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:13.505 [2024-11-20 07:23:16.729222] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1607690): datao=0, datal=4096, cccid=7 00:21:13.506 [2024-11-20 07:23:16.729229] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1669b80) on tqpair(0x1607690): expected_datao=0, payload_size=4096 00:21:13.506 [2024-11-20 07:23:16.729236] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.506 [2024-11-20 07:23:16.729245] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:13.506 [2024-11-20 07:23:16.729253] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:13.506 [2024-11-20 07:23:16.729264] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.506 [2024-11-20 07:23:16.729273] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.506 [2024-11-20 07:23:16.729279] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.506 [2024-11-20 07:23:16.729286] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669880) on tqpair=0x1607690 00:21:13.506 [2024-11-20 07:23:16.729314] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.506 [2024-11-20 07:23:16.729327] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.506 [2024-11-20 07:23:16.729334] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.506 [2024-11-20 07:23:16.729340] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669700) on tqpair=0x1607690 00:21:13.506 [2024-11-20 07:23:16.729356] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.506 [2024-11-20 07:23:16.729367] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.506 [2024-11-20 07:23:16.729373] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.506 [2024-11-20 07:23:16.729380] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669a00) on tqpair=0x1607690 00:21:13.506 [2024-11-20 07:23:16.729393] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.506 [2024-11-20 07:23:16.729403] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.506 [2024-11-20 07:23:16.729410] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.506 [2024-11-20 07:23:16.729416] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669b80) on tqpair=0x1607690 00:21:13.506 ===================================================== 00:21:13.506 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:13.506 ===================================================== 00:21:13.506 Controller Capabilities/Features 00:21:13.506 ================================ 00:21:13.506 Vendor ID: 8086 00:21:13.506 Subsystem Vendor ID: 8086 00:21:13.506 Serial Number: SPDK00000000000001 00:21:13.506 Model Number: SPDK bdev Controller 00:21:13.506 Firmware Version: 25.01 00:21:13.506 Recommended Arb Burst: 6 00:21:13.506 IEEE OUI Identifier: e4 d2 5c 00:21:13.506 Multi-path I/O 00:21:13.506 May have multiple subsystem ports: Yes 00:21:13.506 May have multiple controllers: Yes 00:21:13.506 Associated with SR-IOV VF: No 00:21:13.506 Max Data Transfer Size: 131072 00:21:13.506 Max Number of Namespaces: 32 00:21:13.506 Max Number of I/O Queues: 127 00:21:13.506 NVMe Specification Version (VS): 1.3 00:21:13.506 NVMe Specification Version (Identify): 1.3 
00:21:13.506 Maximum Queue Entries: 128
00:21:13.506 Contiguous Queues Required: Yes
00:21:13.506 Arbitration Mechanisms Supported
00:21:13.506 Weighted Round Robin: Not Supported
00:21:13.506 Vendor Specific: Not Supported
00:21:13.506 Reset Timeout: 15000 ms
00:21:13.506 Doorbell Stride: 4 bytes
00:21:13.506 NVM Subsystem Reset: Not Supported
00:21:13.506 Command Sets Supported
00:21:13.506 NVM Command Set: Supported
00:21:13.506 Boot Partition: Not Supported
00:21:13.506 Memory Page Size Minimum: 4096 bytes
00:21:13.506 Memory Page Size Maximum: 4096 bytes
00:21:13.506 Persistent Memory Region: Not Supported
00:21:13.506 Optional Asynchronous Events Supported
00:21:13.506 Namespace Attribute Notices: Supported
00:21:13.506 Firmware Activation Notices: Not Supported
00:21:13.506 ANA Change Notices: Not Supported
00:21:13.506 PLE Aggregate Log Change Notices: Not Supported
00:21:13.506 LBA Status Info Alert Notices: Not Supported
00:21:13.506 EGE Aggregate Log Change Notices: Not Supported
00:21:13.506 Normal NVM Subsystem Shutdown event: Not Supported
00:21:13.506 Zone Descriptor Change Notices: Not Supported
00:21:13.506 Discovery Log Change Notices: Not Supported
00:21:13.506 Controller Attributes
00:21:13.506 128-bit Host Identifier: Supported
00:21:13.506 Non-Operational Permissive Mode: Not Supported
00:21:13.506 NVM Sets: Not Supported
00:21:13.506 Read Recovery Levels: Not Supported
00:21:13.506 Endurance Groups: Not Supported
00:21:13.506 Predictable Latency Mode: Not Supported
00:21:13.506 Traffic Based Keep ALive: Not Supported
00:21:13.506 Namespace Granularity: Not Supported
00:21:13.506 SQ Associations: Not Supported
00:21:13.506 UUID List: Not Supported
00:21:13.506 Multi-Domain Subsystem: Not Supported
00:21:13.506 Fixed Capacity Management: Not Supported
00:21:13.506 Variable Capacity Management: Not Supported
00:21:13.506 Delete Endurance Group: Not Supported
00:21:13.506 Delete NVM Set: Not Supported
00:21:13.506 Extended LBA Formats Supported: Not Supported
00:21:13.506 Flexible Data Placement Supported: Not Supported
00:21:13.506 
00:21:13.506 Controller Memory Buffer Support
00:21:13.506 ================================
00:21:13.506 Supported: No
00:21:13.506 
00:21:13.506 Persistent Memory Region Support
00:21:13.506 ================================
00:21:13.506 Supported: No
00:21:13.506 
00:21:13.506 Admin Command Set Attributes
00:21:13.506 ============================
00:21:13.506 Security Send/Receive: Not Supported
00:21:13.506 Format NVM: Not Supported
00:21:13.506 Firmware Activate/Download: Not Supported
00:21:13.506 Namespace Management: Not Supported
00:21:13.506 Device Self-Test: Not Supported
00:21:13.506 Directives: Not Supported
00:21:13.506 NVMe-MI: Not Supported
00:21:13.506 Virtualization Management: Not Supported
00:21:13.506 Doorbell Buffer Config: Not Supported
00:21:13.506 Get LBA Status Capability: Not Supported
00:21:13.506 Command & Feature Lockdown Capability: Not Supported
00:21:13.506 Abort Command Limit: 4
00:21:13.506 Async Event Request Limit: 4
00:21:13.506 Number of Firmware Slots: N/A
00:21:13.506 Firmware Slot 1 Read-Only: N/A
00:21:13.506 Firmware Activation Without Reset: N/A
00:21:13.506 Multiple Update Detection Support: N/A
00:21:13.506 Firmware Update Granularity: No Information Provided
00:21:13.506 Per-Namespace SMART Log: No
00:21:13.506 Asymmetric Namespace Access Log Page: Not Supported
00:21:13.506 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:21:13.506 Command Effects Log Page: Supported
00:21:13.506 Get Log Page Extended Data: Supported
00:21:13.506 Telemetry Log Pages: Not Supported
00:21:13.506 Persistent Event Log Pages: Not Supported
00:21:13.506 Supported Log Pages Log Page: May Support
00:21:13.506 Commands Supported & Effects Log Page: Not Supported
00:21:13.506 Feature Identifiers & Effects Log Page:May Support
00:21:13.506 NVMe-MI Commands & Effects Log Page: May Support
00:21:13.506 Data Area 4 for Telemetry Log: Not Supported
00:21:13.506 Error Log Page Entries Supported: 128
00:21:13.506 Keep Alive: Supported
00:21:13.506 Keep Alive Granularity: 10000 ms
00:21:13.506 
00:21:13.506 NVM Command Set Attributes
00:21:13.506 ==========================
00:21:13.506 Submission Queue Entry Size
00:21:13.506 Max: 64
00:21:13.506 Min: 64
00:21:13.506 Completion Queue Entry Size
00:21:13.506 Max: 16
00:21:13.506 Min: 16
00:21:13.506 Number of Namespaces: 32
00:21:13.506 Compare Command: Supported
00:21:13.506 Write Uncorrectable Command: Not Supported
00:21:13.506 Dataset Management Command: Supported
00:21:13.506 Write Zeroes Command: Supported
00:21:13.506 Set Features Save Field: Not Supported
00:21:13.506 Reservations: Supported
00:21:13.506 Timestamp: Not Supported
00:21:13.506 Copy: Supported
00:21:13.506 Volatile Write Cache: Present
00:21:13.506 Atomic Write Unit (Normal): 1
00:21:13.506 Atomic Write Unit (PFail): 1
00:21:13.506 Atomic Compare & Write Unit: 1
00:21:13.506 Fused Compare & Write: Supported
00:21:13.506 Scatter-Gather List
00:21:13.506 SGL Command Set: Supported
00:21:13.506 SGL Keyed: Supported
00:21:13.506 SGL Bit Bucket Descriptor: Not Supported
00:21:13.506 SGL Metadata Pointer: Not Supported
00:21:13.506 Oversized SGL: Not Supported
00:21:13.506 SGL Metadata Address: Not Supported
00:21:13.506 SGL Offset: Supported
00:21:13.506 Transport SGL Data Block: Not Supported
00:21:13.506 Replay Protected Memory Block: Not Supported
00:21:13.506 
00:21:13.506 Firmware Slot Information
00:21:13.506 =========================
00:21:13.506 Active slot: 1
00:21:13.506 Slot 1 Firmware Revision: 25.01
00:21:13.506 
00:21:13.506 
00:21:13.506 Commands Supported and Effects
00:21:13.506 ==============================
00:21:13.506 Admin Commands
00:21:13.506 --------------
00:21:13.506 Get Log Page (02h): Supported
00:21:13.506 Identify (06h): Supported
00:21:13.506 Abort (08h): Supported
00:21:13.506 Set Features (09h): Supported
00:21:13.506 Get Features (0Ah): Supported
00:21:13.506 Asynchronous Event Request (0Ch): Supported
00:21:13.506 Keep Alive (18h): Supported
00:21:13.506 I/O Commands
00:21:13.506 ------------
00:21:13.506 Flush (00h): Supported LBA-Change
00:21:13.506 Write (01h): Supported LBA-Change
00:21:13.506 Read (02h): Supported
00:21:13.507 Compare (05h): Supported
00:21:13.507 Write Zeroes (08h): Supported LBA-Change
00:21:13.507 Dataset Management (09h): Supported LBA-Change
00:21:13.507 Copy (19h): Supported LBA-Change
00:21:13.507 
00:21:13.507 Error Log
00:21:13.507 =========
00:21:13.507 
00:21:13.507 Arbitration
00:21:13.507 ===========
00:21:13.507 Arbitration Burst: 1
00:21:13.507 
00:21:13.507 Power Management
00:21:13.507 ================
00:21:13.507 Number of Power States: 1
00:21:13.507 Current Power State: Power State #0
00:21:13.507 Power State #0:
00:21:13.507 Max Power: 0.00 W
00:21:13.507 Non-Operational State: Operational
00:21:13.507 Entry Latency: Not Reported
00:21:13.507 Exit Latency: Not Reported
00:21:13.507 Relative Read Throughput: 0
00:21:13.507 Relative Read Latency: 0
00:21:13.507 Relative Write Throughput: 0
00:21:13.507 Relative Write Latency: 0 
00:21:13.507 Idle Power: Not Reported 00:21:13.507 Active Power: Not Reported 00:21:13.507 Non-Operational Permissive Mode: Not Supported 00:21:13.507 00:21:13.507 Health Information 00:21:13.507 ================== 00:21:13.507 Critical Warnings: 00:21:13.507 Available Spare Space: OK 00:21:13.507 Temperature: OK 00:21:13.507 Device Reliability: OK 00:21:13.507 Read Only: No 00:21:13.507 Volatile Memory Backup: OK 00:21:13.507 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:13.507 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:21:13.507 Available Spare: 0% 00:21:13.507 Available Spare Threshold: 0% 00:21:13.507 Life Percentage Used:[2024-11-20 07:23:16.729532] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.507 [2024-11-20 07:23:16.729544] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1607690) 00:21:13.507 [2024-11-20 07:23:16.729556] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.507 [2024-11-20 07:23:16.729578] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669b80, cid 7, qid 0 00:21:13.507 [2024-11-20 07:23:16.729673] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.507 [2024-11-20 07:23:16.729685] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.507 [2024-11-20 07:23:16.729692] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.507 [2024-11-20 07:23:16.729699] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669b80) on tqpair=0x1607690 00:21:13.507 [2024-11-20 07:23:16.729745] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:21:13.507 [2024-11-20 07:23:16.729763] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669100) on tqpair=0x1607690 00:21:13.507 [2024-11-20 07:23:16.729774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.507 [2024-11-20 07:23:16.729783] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669280) on tqpair=0x1607690 00:21:13.507 [2024-11-20 07:23:16.729790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.507 [2024-11-20 07:23:16.729798] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669400) on tqpair=0x1607690 00:21:13.507 [2024-11-20 07:23:16.729805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.507 [2024-11-20 07:23:16.729813] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669580) on tqpair=0x1607690 00:21:13.507 [2024-11-20 07:23:16.729821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.507 [2024-11-20 07:23:16.729833] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.507 [2024-11-20 07:23:16.729841] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.507 [2024-11-20 07:23:16.729847] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1607690) 00:21:13.507 [2024-11-20 07:23:16.729858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:13.507 [2024-11-20 07:23:16.729879] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669580, cid 3, qid 0 00:21:13.507 [2024-11-20 07:23:16.729955] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.507 [2024-11-20 07:23:16.729969] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.507 [2024-11-20 07:23:16.729975] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.507 [2024-11-20 07:23:16.729982] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669580) on tqpair=0x1607690 00:21:13.507 [2024-11-20 07:23:16.729993] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.507 [2024-11-20 07:23:16.730001] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.507 [2024-11-20 07:23:16.730007] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1607690) 00:21:13.507 [2024-11-20 07:23:16.730017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.507 [2024-11-20 07:23:16.730047] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669580, cid 3, qid 0 00:21:13.507 [2024-11-20 07:23:16.730134] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.507 [2024-11-20 07:23:16.730148] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.507 [2024-11-20 07:23:16.730155] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.507 [2024-11-20 07:23:16.730161] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669580) on tqpair=0x1607690 00:21:13.507 [2024-11-20 07:23:16.730169] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:21:13.507 [2024-11-20 07:23:16.730177] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:21:13.507 [2024-11-20 07:23:16.730193] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.507 [2024-11-20 07:23:16.730201] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.507 [2024-11-20 07:23:16.730207] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1607690) 00:21:13.507 [2024-11-20 07:23:16.730218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.507 [2024-11-20 07:23:16.730238] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669580, cid 3, qid 0 00:21:13.507 [2024-11-20 07:23:16.730316] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.507 [2024-11-20 07:23:16.730330] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.507 [2024-11-20 07:23:16.730337] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.507 [2024-11-20 07:23:16.730343] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669580) on tqpair=0x1607690 00:21:13.507 [2024-11-20 07:23:16.730359] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.507 [2024-11-20 07:23:16.730369] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.507 [2024-11-20 07:23:16.730375] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1607690) 00:21:13.507 [2024-11-20 07:23:16.730386] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.507 [2024-11-20 07:23:16.730406] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669580, cid 3, qid 0 00:21:13.507 [2024-11-20 07:23:16.730486] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.507 [2024-11-20 07:23:16.730500] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.507 [2024-11-20 07:23:16.730506] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.507 [2024-11-20 07:23:16.730513] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669580) on tqpair=0x1607690 00:21:13.507 [2024-11-20 07:23:16.730529] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.507 [2024-11-20 07:23:16.730537] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.507 [2024-11-20 07:23:16.730544] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1607690) 00:21:13.507 [2024-11-20 07:23:16.730554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.507 [2024-11-20 07:23:16.730575] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669580, cid 3, qid 0 00:21:13.507 [2024-11-20 07:23:16.730650] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.507 [2024-11-20 07:23:16.730663] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.507 [2024-11-20 07:23:16.730670] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.507 [2024-11-20 07:23:16.730676] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669580) on tqpair=0x1607690 00:21:13.507 [2024-11-20 07:23:16.730692] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.507 [2024-11-20 07:23:16.730701] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.507 [2024-11-20 07:23:16.730708] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1607690) 00:21:13.507 [2024-11-20 07:23:16.730722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.507 [2024-11-20 07:23:16.730743] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669580, cid 3, qid 0 00:21:13.508 [2024-11-20 07:23:16.730817] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.508 [2024-11-20 07:23:16.730831] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.508 [2024-11-20 07:23:16.730837] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.508 [2024-11-20 07:23:16.730844] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669580) on tqpair=0x1607690 00:21:13.508 [2024-11-20 07:23:16.730860] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.508 [2024-11-20 07:23:16.730869] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.508 [2024-11-20 07:23:16.730876] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1607690) 00:21:13.508 [2024-11-20 07:23:16.730886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.508 [2024-11-20 07:23:16.730906] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669580, cid 3, qid 0 00:21:13.508 [2024-11-20 07:23:16.730983] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.508 [2024-11-20 07:23:16.730996] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.508 [2024-11-20 07:23:16.731003] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.508 [2024-11-20 07:23:16.731009] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669580) on tqpair=0x1607690 00:21:13.508 [2024-11-20 07:23:16.731025] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.508 [2024-11-20 07:23:16.731034] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.508 [2024-11-20 07:23:16.731041] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1607690) 00:21:13.508 [2024-11-20 07:23:16.731051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.508 [2024-11-20 07:23:16.731071] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669580, cid 3, qid 0 00:21:13.508 [2024-11-20 07:23:16.731143] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.508 [2024-11-20 07:23:16.731155] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.508 [2024-11-20 07:23:16.731162] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.508 [2024-11-20 07:23:16.731168] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669580) on tqpair=0x1607690 00:21:13.508 [2024-11-20 07:23:16.731184] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.508 [2024-11-20 07:23:16.731192] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.508 [2024-11-20 07:23:16.731199] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1607690) 00:21:13.508 [2024-11-20 07:23:16.731209] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.508 [2024-11-20 07:23:16.731229] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669580, cid 3, qid 0 00:21:13.508 [2024-11-20 07:23:16.731312] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.508 [2024-11-20 07:23:16.731326] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.508 [2024-11-20 07:23:16.731332] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.508 [2024-11-20 07:23:16.731339] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669580) on tqpair=0x1607690 00:21:13.508 [2024-11-20 07:23:16.731355] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.508 [2024-11-20 07:23:16.731364] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.508 [2024-11-20 07:23:16.731370] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1607690) 00:21:13.508 [2024-11-20 07:23:16.731381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.508 [2024-11-20 07:23:16.731408] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669580, cid 3, qid 0 00:21:13.508 [2024-11-20 07:23:16.731485] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.508 [2024-11-20 
07:23:16.731499] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.508 [2024-11-20 07:23:16.731506] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.508 [2024-11-20 07:23:16.731512] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669580) on tqpair=0x1607690 00:21:13.508 [2024-11-20 07:23:16.731528] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.508 [2024-11-20 07:23:16.731537] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.508 [2024-11-20 07:23:16.731544] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1607690) 00:21:13.508 [2024-11-20 07:23:16.731554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.508 [2024-11-20 07:23:16.731574] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669580, cid 3, qid 0 00:21:13.508 [2024-11-20 07:23:16.731646] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.508 [2024-11-20 07:23:16.731657] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.508 [2024-11-20 07:23:16.731664] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.508 [2024-11-20 07:23:16.731671] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669580) on tqpair=0x1607690 00:21:13.508 [2024-11-20 07:23:16.731687] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.508 [2024-11-20 07:23:16.731695] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.508 [2024-11-20 07:23:16.731702] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1607690) 00:21:13.508 [2024-11-20 07:23:16.731712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.508 [2024-11-20 07:23:16.731732] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669580, cid 3, qid 0 00:21:13.508 [2024-11-20 07:23:16.731804] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.508 [2024-11-20 07:23:16.731815] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.508 [2024-11-20 07:23:16.731822] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.508 [2024-11-20 07:23:16.731828] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669580) on tqpair=0x1607690 00:21:13.508 [2024-11-20 07:23:16.731844] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.508 [2024-11-20 07:23:16.731853] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.508 [2024-11-20 07:23:16.731859] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1607690) 00:21:13.508 [2024-11-20 07:23:16.731869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.508 [2024-11-20 07:23:16.731889] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669580, cid 3, qid 0 00:21:13.508 [2024-11-20 07:23:16.731964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.508 [2024-11-20 07:23:16.731976] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.508 [2024-11-20 07:23:16.731983] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.508 
[2024-11-20 07:23:16.731989] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669580) on tqpair=0x1607690 00:21:13.508 [2024-11-20 07:23:16.732005] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.508 [2024-11-20 07:23:16.732013] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.508 [2024-11-20 07:23:16.732020] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1607690) 00:21:13.508 [2024-11-20 07:23:16.732030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.508 [2024-11-20 07:23:16.732050] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669580, cid 3, qid 0 00:21:13.508 [2024-11-20 07:23:16.732121] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.508 [2024-11-20 07:23:16.732133] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.508 [2024-11-20 07:23:16.732140] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.508 [2024-11-20 07:23:16.732146] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669580) on tqpair=0x1607690 00:21:13.508 [2024-11-20 07:23:16.732162] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.508 [2024-11-20 07:23:16.732171] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.508 [2024-11-20 07:23:16.732178] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1607690) 00:21:13.508 [2024-11-20 07:23:16.732188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.508 [2024-11-20 07:23:16.732208] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669580, cid 3, qid 0 00:21:13.508 [2024-11-20 07:23:16.732279] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.508 [2024-11-20 07:23:16.732292] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.508 [2024-11-20 07:23:16.732298] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.508 [2024-11-20 07:23:16.736317] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669580) on tqpair=0x1607690 00:21:13.508 [2024-11-20 07:23:16.736340] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.508 [2024-11-20 07:23:16.736350] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.508 [2024-11-20 07:23:16.736357] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1607690) 00:21:13.508 [2024-11-20 07:23:16.736368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.508 [2024-11-20 07:23:16.736390] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1669580, cid 3, qid 0 00:21:13.508 [2024-11-20 07:23:16.736467] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.508 [2024-11-20 07:23:16.736479] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.508 [2024-11-20 07:23:16.736486] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.508 [2024-11-20 07:23:16.736492] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1669580) on tqpair=0x1607690 00:21:13.508 [2024-11-20 07:23:16.736505] 
nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:21:13.508 0% 00:21:13.508 Data Units Read: 0 00:21:13.508 Data Units Written: 0 00:21:13.508 Host Read Commands: 0 00:21:13.508 Host Write Commands: 0 00:21:13.508 Controller Busy Time: 0 minutes 00:21:13.508 Power Cycles: 0 00:21:13.508 Power On Hours: 0 hours 00:21:13.508 Unsafe Shutdowns: 0 00:21:13.508 Unrecoverable Media Errors: 0 00:21:13.508 Lifetime Error Log Entries: 0 00:21:13.508 Warning Temperature Time: 0 minutes 00:21:13.508 Critical Temperature Time: 0 minutes 00:21:13.508 00:21:13.508 Number of Queues 00:21:13.508 ================ 00:21:13.508 Number of I/O Submission Queues: 127 00:21:13.508 Number of I/O Completion Queues: 127 00:21:13.508 00:21:13.508 Active Namespaces 00:21:13.508 ================= 00:21:13.508 Namespace ID:1 00:21:13.508 Error Recovery Timeout: Unlimited 00:21:13.508 Command Set Identifier: NVM (00h) 00:21:13.508 Deallocate: Supported 00:21:13.509 Deallocated/Unwritten Error: Not Supported 00:21:13.509 Deallocated Read Value: Unknown 00:21:13.509 Deallocate in Write Zeroes: Not Supported 00:21:13.509 Deallocated Guard Field: 0xFFFF 00:21:13.509 Flush: Supported 00:21:13.509 Reservation: Supported 00:21:13.509 Namespace Sharing Capabilities: Multiple Controllers 00:21:13.509 Size (in LBAs): 131072 (0GiB) 00:21:13.509 Capacity (in LBAs): 131072 (0GiB) 00:21:13.509 Utilization (in LBAs): 131072 (0GiB) 00:21:13.509 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:13.509 EUI64: ABCDEF0123456789 00:21:13.509 UUID: 23cedb15-5eb6-4b1e-aefb-d1b8cd35e385 00:21:13.509 Thin Provisioning: Not Supported 00:21:13.509 Per-NS Atomic Units: Yes 00:21:13.509 Atomic Boundary Size (Normal): 0 00:21:13.509 Atomic Boundary Size (PFail): 0 00:21:13.509 Atomic Boundary Offset: 0 00:21:13.509 Maximum Single Source Range Length: 65535 00:21:13.509 Maximum Copy Length: 65535 00:21:13.509 Maximum Source Range Count: 1 00:21:13.509 NGUID/EUI64 Never Reused: No 00:21:13.509 Namespace Write Protected: No 00:21:13.509 Number of LBA Formats: 1 00:21:13.509 Current LBA Format: LBA Format #00 00:21:13.509 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:13.509 00:21:13.509 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:21:13.509 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:13.509 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.509 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:13.509 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.509 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:13.509 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:21:13.509 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:13.509 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:21:13.509 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:13.509 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:21:13.509 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:13.509 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:21:13.509 rmmod nvme_tcp 00:21:13.509 rmmod nvme_fabrics 00:21:13.509 rmmod nvme_keyring 00:21:13.509 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:13.509 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:21:13.509 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:21:13.509 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2561625 ']' 00:21:13.509 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2561625 00:21:13.509 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 2561625 ']' 00:21:13.509 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 2561625 00:21:13.509 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:21:13.509 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:13.509 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2561625 00:21:13.509 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:13.509 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:13.509 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2561625' 00:21:13.509 killing process with pid 2561625 00:21:13.509 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 2561625 00:21:13.509 07:23:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 2561625 00:21:13.769 07:23:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:13.769 07:23:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:13.769 07:23:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:13.769 07:23:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:21:13.769 07:23:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:21:13.769 07:23:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:13.769 07:23:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:21:13.769 07:23:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:13.769 07:23:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:13.769 07:23:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.769 07:23:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:13.769 07:23:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:16.311 00:21:16.311 real 0m5.507s 00:21:16.311 user 0m4.575s 00:21:16.311 sys 0m1.984s 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:16.311 ************************************ 00:21:16.311 END TEST nvmf_identify 00:21:16.311 
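The tail end of the identify test above is the shared nvmftestfini/nvmfcleanup teardown from nvmf/common.sh. Collapsed out of the xtrace into plain commands, it amounts to roughly the sketch below; the subsystem NQN, the nvmf_tgt PID (2561625) and the cvl_* interface names are simply the ones this run happened to use, paths are abbreviated relative to the SPDK checkout, and _remove_spdk_ns is summarized here as a plain ip netns delete.

    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem first
    modprobe -v -r nvme-tcp          # unload host-side modules; the rmmod lines above are the -v output
    modprobe -v -r nvme-fabrics
    kill 2561625                     # killprocess: stop the nvmf_tgt reactor process, then wait for it to exit
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip the SPDK_NVMF accept rule again
    ip netns delete cvl_0_0_ns_spdk  # what _remove_spdk_ns amounts to for this run
    ip -4 addr flush cvl_0_1         # clear the initiator-side address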
************************************ 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.311 ************************************ 00:21:16.311 START TEST nvmf_perf 00:21:16.311 ************************************ 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:16.311 * Looking for test storage... 00:21:16.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:16.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.311 --rc genhtml_branch_coverage=1 00:21:16.311 --rc genhtml_function_coverage=1 00:21:16.311 --rc genhtml_legend=1 00:21:16.311 --rc geninfo_all_blocks=1 00:21:16.311 --rc geninfo_unexecuted_blocks=1 00:21:16.311 00:21:16.311 ' 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:16.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.311 --rc genhtml_branch_coverage=1 00:21:16.311 --rc genhtml_function_coverage=1 00:21:16.311 --rc genhtml_legend=1 00:21:16.311 --rc geninfo_all_blocks=1 00:21:16.311 --rc geninfo_unexecuted_blocks=1 00:21:16.311 00:21:16.311 ' 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:16.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.311 --rc genhtml_branch_coverage=1 00:21:16.311 --rc genhtml_function_coverage=1 00:21:16.311 --rc genhtml_legend=1 00:21:16.311 --rc geninfo_all_blocks=1 00:21:16.311 --rc geninfo_unexecuted_blocks=1 00:21:16.311 00:21:16.311 ' 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:16.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.311 --rc genhtml_branch_coverage=1 00:21:16.311 --rc genhtml_function_coverage=1 00:21:16.311 --rc genhtml_legend=1 00:21:16.311 --rc geninfo_all_blocks=1 00:21:16.311 --rc geninfo_unexecuted_blocks=1 00:21:16.311 00:21:16.311 ' 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:21:16.311 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:16.312 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.312 07:23:19 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:16.312 07:23:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:18.218 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:18.218 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:18.218 Found net devices under 0000:09:00.0: cvl_0_0 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:18.218 07:23:21 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:18.218 Found net devices under 0000:09:00.1: cvl_0_1 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:18.218 07:23:21 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:18.218 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:18.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:18.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:21:18.219 00:21:18.219 --- 10.0.0.2 ping statistics --- 00:21:18.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.219 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:21:18.219 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:18.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:18.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:21:18.219 00:21:18.219 --- 10.0.0.1 ping statistics --- 00:21:18.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.219 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:21:18.219 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:18.219 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:21:18.219 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:18.219 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:18.219 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:18.219 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:18.219 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:18.219 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:18.219 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:18.219 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:18.219 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:18.219 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:18.219 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:18.219 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2563714 00:21:18.219 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:18.219 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2563714 00:21:18.219 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 2563714 ']' 00:21:18.219 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.219 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:18.219 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:21:18.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.219 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:18.219 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:18.477 [2024-11-20 07:23:21.668814] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:21:18.477 [2024-11-20 07:23:21.668889] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:18.477 [2024-11-20 07:23:21.742673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:18.477 [2024-11-20 07:23:21.805100] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:18.477 [2024-11-20 07:23:21.805154] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:18.477 [2024-11-20 07:23:21.805167] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:18.477 [2024-11-20 07:23:21.805179] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:18.477 [2024-11-20 07:23:21.805188] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:18.477 [2024-11-20 07:23:21.806817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.477 [2024-11-20 07:23:21.806881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:18.477 [2024-11-20 07:23:21.806950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:18.477 [2024-11-20 07:23:21.806953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.735 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:18.735 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:21:18.735 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:18.735 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:18.735 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:18.735 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:18.735 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:18.735 07:23:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:21:22.013 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:21:22.013 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:22.013 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:0b:00.0 00:21:22.013 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:22.579 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
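Most of the trace just above is nvmf_tcp_init from nvmf/common.sh wiring the two e810 ports back to back before nvmf_tgt starts: the target port (cvl_0_0) is moved into a private network namespace and given 10.0.0.2, while the initiator port (cvl_0_1) stays in the root namespace with 10.0.0.1. Pulled out of the xtrace, the sequence is roughly the sketch below; interface names and addresses are the ones from this run, the iptables comment string is abbreviated, and the nvmf_tgt path is shortened to the build tree.

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                 # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                           # reach the target address from the root namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # and the initiator address from inside the netns
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # target app lives in the netns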
00:21:22.579 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:0b:00.0 ']' 00:21:22.579 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:22.579 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:22.579 07:23:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:22.579 [2024-11-20 07:23:25.983628] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:22.579 07:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:23.145 07:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:23.145 07:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:23.145 07:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:23.145 07:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:23.402 07:23:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:23.660 [2024-11-20 07:23:27.075675] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:23.918 07:23:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:24.175 07:23:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:0b:00.0 ']' 00:21:24.175 07:23:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:21:24.175 07:23:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:21:24.175 07:23:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:21:25.546 Initializing NVMe Controllers 00:21:25.546 Attached to NVMe Controller at 0000:0b:00.0 [8086:0a54] 00:21:25.546 Associating PCIE (0000:0b:00.0) NSID 1 with lcore 0 00:21:25.546 Initialization complete. Launching workers. 
00:21:25.546 ======================================================== 00:21:25.546 Latency(us) 00:21:25.546 Device Information : IOPS MiB/s Average min max 00:21:25.546 PCIE (0000:0b:00.0) NSID 1 from core 0: 85315.36 333.26 374.54 33.01 5438.07 00:21:25.546 ======================================================== 00:21:25.546 Total : 85315.36 333.26 374.54 33.01 5438.07 00:21:25.546 00:21:25.546 07:23:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:26.918 Initializing NVMe Controllers 00:21:26.918 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:26.918 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:26.919 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:26.919 Initialization complete. Launching workers. 00:21:26.919 ======================================================== 00:21:26.919 Latency(us) 00:21:26.919 Device Information : IOPS MiB/s Average min max 00:21:26.919 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 90.00 0.35 11462.21 140.52 45699.30 00:21:26.919 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 17937.36 7945.75 47900.41 00:21:26.919 ======================================================== 00:21:26.919 Total : 146.00 0.57 13945.83 140.52 47900.41 00:21:26.919 00:21:26.919 07:23:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:28.292 Initializing NVMe Controllers 00:21:28.292 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:28.292 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:28.292 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:28.292 Initialization complete. Launching workers. 00:21:28.292 ======================================================== 00:21:28.292 Latency(us) 00:21:28.292 Device Information : IOPS MiB/s Average min max 00:21:28.292 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8565.99 33.46 3736.48 723.32 7583.11 00:21:28.292 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3898.00 15.23 8251.83 6821.93 16312.29 00:21:28.292 ======================================================== 00:21:28.292 Total : 12463.99 48.69 5148.61 723.32 16312.29 00:21:28.292 00:21:28.292 07:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:21:28.292 07:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:21:28.292 07:23:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:30.821 Initializing NVMe Controllers 00:21:30.821 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:30.821 Controller IO queue size 128, less than required. 00:21:30.821 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
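Strung together, the perf.sh steps traced above first attach the local NVMe controller (gen_nvme.sh output loaded through load_subsystem_config is where Nvme0n1 comes from), then build the TCP target and point spdk_nvme_perf at it. A condensed sketch follows, using this run's names, the 10.0.0.2 listener, and abbreviated paths; the exact plumbing between gen_nvme.sh and load_subsystem_config is not shown in the trace and is omitted here.

    rpc=scripts/rpc.py               # the trace uses the full workspace path
    $rpc framework_get_config bdev | jq -r '.[].params | select(.name=="Nvme0").traddr'   # -> 0000:0b:00.0
    $rpc bdev_malloc_create 64 512   # -> Malloc0 in this run (64 MiB, 512-byte blocks)
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # becomes NSID 1 here
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1    # becomes NSID 2 (the local disk)
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'     # one of the fabric runs above

The MiB/s column in each table is just IOPS times the I/O size: for the local PCIe baseline, 85315.36 IOPS at 4096 bytes is about 333.26 MiB/s, matching the table, and in the later -q 128 -o 262144 run 1710.50 IOPS at 256 KiB gives the 427.62 MiB/s shown.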
00:21:30.821 Controller IO queue size 128, less than required. 00:21:30.821 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:30.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:30.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:30.821 Initialization complete. Launching workers. 00:21:30.821 ======================================================== 00:21:30.821 Latency(us) 00:21:30.821 Device Information : IOPS MiB/s Average min max 00:21:30.821 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1710.50 427.62 75733.91 54290.29 127630.45 00:21:30.821 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 594.50 148.62 224718.83 118938.95 325754.42 00:21:30.821 ======================================================== 00:21:30.821 Total : 2305.00 576.25 114159.74 54290.29 325754.42 00:21:30.821 00:21:30.821 07:23:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:21:30.821 No valid NVMe controllers or AIO or URING devices found 00:21:30.821 Initializing NVMe Controllers 00:21:30.821 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:30.821 Controller IO queue size 128, less than required. 00:21:30.821 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:30.821 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:21:30.821 Controller IO queue size 128, less than required. 00:21:30.821 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:30.821 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:21:30.821 WARNING: Some requested NVMe devices were skipped 00:21:30.821 07:23:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:21:34.103 Initializing NVMe Controllers 00:21:34.103 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:34.103 Controller IO queue size 128, less than required. 00:21:34.103 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:34.103 Controller IO queue size 128, less than required. 00:21:34.103 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:34.103 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:34.103 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:34.103 Initialization complete. Launching workers. 
00:21:34.103 00:21:34.103 ==================== 00:21:34.103 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:34.103 TCP transport: 00:21:34.103 polls: 8882 00:21:34.103 idle_polls: 5761 00:21:34.103 sock_completions: 3121 00:21:34.103 nvme_completions: 6053 00:21:34.103 submitted_requests: 9138 00:21:34.103 queued_requests: 1 00:21:34.103 00:21:34.103 ==================== 00:21:34.103 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:34.103 TCP transport: 00:21:34.103 polls: 11963 00:21:34.103 idle_polls: 8496 00:21:34.103 sock_completions: 3467 00:21:34.103 nvme_completions: 6511 00:21:34.103 submitted_requests: 9772 00:21:34.103 queued_requests: 1 00:21:34.103 ======================================================== 00:21:34.103 Latency(us) 00:21:34.103 Device Information : IOPS MiB/s Average min max 00:21:34.103 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1511.12 377.78 85733.28 45400.59 142030.69 00:21:34.103 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1625.48 406.37 79966.26 47209.26 142808.24 00:21:34.103 ======================================================== 00:21:34.103 Total : 3136.61 784.15 82744.64 45400.59 142808.24 00:21:34.103 00:21:34.103 07:23:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:21:34.103 07:23:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:34.103 07:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:21:34.103 07:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:34.103 07:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:21:34.103 07:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:34.103 07:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:21:34.103 07:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:34.103 07:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:21:34.103 07:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:34.103 07:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:34.103 rmmod nvme_tcp 00:21:34.103 rmmod nvme_fabrics 00:21:34.103 rmmod nvme_keyring 00:21:34.103 07:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:34.103 07:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:21:34.103 07:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:21:34.103 07:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2563714 ']' 00:21:34.103 07:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2563714 00:21:34.103 07:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 2563714 ']' 00:21:34.103 07:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 2563714 00:21:34.103 07:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:21:34.103 07:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:34.103 07:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2563714 00:21:34.103 07:23:37 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:34.103 07:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:34.103 07:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2563714' 00:21:34.103 killing process with pid 2563714 00:21:34.103 07:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 2563714 00:21:34.103 07:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 2563714 00:21:35.478 07:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:35.478 07:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:35.478 07:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:35.478 07:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:21:35.478 07:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:21:35.478 07:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:35.478 07:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:21:35.478 07:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:35.478 07:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:35.478 07:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.478 07:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:35.478 07:23:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.062 07:23:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:38.062 00:21:38.062 real 0m21.734s 00:21:38.062 user 1m6.988s 00:21:38.062 sys 0m5.768s 00:21:38.062 07:23:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:38.062 07:23:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:38.062 ************************************ 00:21:38.062 END TEST nvmf_perf 00:21:38.062 ************************************ 00:21:38.062 07:23:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:38.062 07:23:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:38.062 07:23:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:38.062 07:23:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.062 ************************************ 00:21:38.062 START TEST nvmf_fio_host 00:21:38.062 ************************************ 00:21:38.062 07:23:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:38.062 * Looking for test storage... 
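Before the nvmf_fio_host storage probe continues, note the shape of the nvmf_perf teardown just above: unload the host-side NVMe fabrics modules, kill the target reactor process, strip the iptables rules the test installed, drop the target network namespace, and flush the initiator address. Condensed into plain commands, it amounts to the sketch below; everything is taken from the trace except the namespace deletion, which is assumed to be what _remove_spdk_ns does here.

    modprobe -v -r nvme-tcp                                 # also pulls out nvme_fabrics / nvme_keyring, as logged
    kill "$nvmfpid" && wait "$nvmfpid"                      # pid 2563714 in this run
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the rules tagged by the test
    ip netns delete cvl_0_0_ns_spdk                         # assumption: the effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                                # initiator-side address cleanup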
00:21:38.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:38.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.062 --rc genhtml_branch_coverage=1 00:21:38.062 --rc genhtml_function_coverage=1 00:21:38.062 --rc genhtml_legend=1 00:21:38.062 --rc geninfo_all_blocks=1 00:21:38.062 --rc geninfo_unexecuted_blocks=1 00:21:38.062 00:21:38.062 ' 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:38.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.062 --rc genhtml_branch_coverage=1 00:21:38.062 --rc genhtml_function_coverage=1 00:21:38.062 --rc genhtml_legend=1 00:21:38.062 --rc geninfo_all_blocks=1 00:21:38.062 --rc geninfo_unexecuted_blocks=1 00:21:38.062 00:21:38.062 ' 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:38.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.062 --rc genhtml_branch_coverage=1 00:21:38.062 --rc genhtml_function_coverage=1 00:21:38.062 --rc genhtml_legend=1 00:21:38.062 --rc geninfo_all_blocks=1 00:21:38.062 --rc geninfo_unexecuted_blocks=1 00:21:38.062 00:21:38.062 ' 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:38.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.062 --rc genhtml_branch_coverage=1 00:21:38.062 --rc genhtml_function_coverage=1 00:21:38.062 --rc genhtml_legend=1 00:21:38.062 --rc geninfo_all_blocks=1 00:21:38.062 --rc geninfo_unexecuted_blocks=1 00:21:38.062 00:21:38.062 ' 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:38.062 07:23:41 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:38.062 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:38.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:38.063 
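Two things in the common.sh sourcing above are worth calling out. First, the host identity is generated on the fly with nvme gen-hostnqn and reused via the --hostnqn/--hostid arguments in NVME_HOST for later nvme connect calls. Second, the message "line 33: [: : integer expression expected" is benign: the trace shows '[' '' -eq 1 ']' being evaluated, and bash's test builtin refuses to compare an empty string numerically, so the branch simply falls through. A minimal reproduction of that second point (illustrative, not part of the captured run):

    flag=''
    [ "$flag" -eq 1 ]        # prints "[: : integer expression expected" and returns non-zero
    [ "${flag:-0}" -eq 1 ]   # substituting a default first keeps the numeric test well-formed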
07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:21:38.063 07:23:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:39.992 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:39.992 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:21:39.992 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:39.992 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:39.992 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:39.992 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:39.992 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:39.992 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:21:39.992 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:39.992 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:21:39.992 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:21:39.992 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:21:39.992 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:21:39.992 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:21:39.992 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:21:39.992 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:39.992 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:39.992 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:39.992 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:39.992 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:39.992 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:39.992 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:39.992 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:39.992 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:39.992 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:39.992 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:39.992 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:39.992 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:39.992 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:39.992 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:39.992 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:39.992 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:39.992 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:39.992 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:39.992 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:39.992 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:39.992 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:39.992 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:39.992 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:39.993 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:39.993 Found net devices under 0000:09:00.0: cvl_0_0 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:39.993 Found net devices under 0000:09:00.1: cvl_0_1 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:39.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:39.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.411 ms 00:21:39.993 00:21:39.993 --- 10.0.0.2 ping statistics --- 00:21:39.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.993 rtt min/avg/max/mdev = 0.411/0.411/0.411/0.000 ms 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:39.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:39.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:21:39.993 00:21:39.993 --- 10.0.0.1 ping statistics --- 00:21:39.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.993 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2567695 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2567695 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # '[' -z 2567695 ']' 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:39.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:39.993 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.252 [2024-11-20 07:23:43.425379] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
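nvmftestinit above moves one e810 port into a target network namespace and leaves the other as the initiator, wires up a /24 between them, punches a firewall hole for port 4420, verifies reachability in both directions, and only then starts nvmf_tgt inside the namespace; the application banner and the rpc.py subsystem configuration continue below. Condensed into the underlying commands, all taken from the trace (backgrounding of the target is assumed from the waitforlisten that follows), the bring-up looks like:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # the real rule also carries an '-m comment' SPDK_NVMF tag so teardown can find it
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                        # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

The rpc.py calls that follow (nvmf_create_transport -t tcp -o -u 8192, bdev_malloc_create 64 512 -b Malloc1, nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener on 10.0.0.2:4420) expose that malloc bdev as the namespace the fio runs exercise.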
00:21:40.252 [2024-11-20 07:23:43.425455] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:40.252 [2024-11-20 07:23:43.501854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:40.252 [2024-11-20 07:23:43.559694] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:40.252 [2024-11-20 07:23:43.559737] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:40.252 [2024-11-20 07:23:43.559762] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:40.252 [2024-11-20 07:23:43.559772] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:40.252 [2024-11-20 07:23:43.559781] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:40.252 [2024-11-20 07:23:43.561366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:40.252 [2024-11-20 07:23:43.561416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:40.252 [2024-11-20 07:23:43.561419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.252 [2024-11-20 07:23:43.561394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:40.510 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:40.510 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:21:40.510 07:23:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:40.767 [2024-11-20 07:23:43.981822] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:40.767 07:23:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:40.768 07:23:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:40.768 07:23:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.768 07:23:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:41.026 Malloc1 00:21:41.026 07:23:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:41.284 07:23:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:41.541 07:23:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:41.799 [2024-11-20 07:23:45.198166] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:41.799 07:23:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:42.057 07:23:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:21:42.057 07:23:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:42.057 07:23:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:42.057 07:23:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:21:42.057 07:23:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:42.057 07:23:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:21:42.057 07:23:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:42.057 07:23:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:21:42.057 07:23:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:21:42.057 07:23:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:21:42.058 07:23:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:42.058 07:23:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:21:42.058 07:23:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:21:42.316 07:23:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:21:42.316 07:23:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:21:42.316 07:23:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:21:42.316 07:23:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:42.316 07:23:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:21:42.316 07:23:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:21:42.316 07:23:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:21:42.316 07:23:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:21:42.316 07:23:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:42.316 07:23:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:42.316 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:42.316 fio-3.35 00:21:42.316 Starting 1 thread 00:21:44.850 00:21:44.850 test: (groupid=0, jobs=1): 
err= 0: pid=2568059: Wed Nov 20 07:23:48 2024 00:21:44.850 read: IOPS=8875, BW=34.7MiB/s (36.4MB/s)(69.6MiB/2007msec) 00:21:44.850 slat (nsec): min=1897, max=181437, avg=2485.81, stdev=2005.90 00:21:44.850 clat (usec): min=2446, max=13698, avg=7865.68, stdev=640.15 00:21:44.850 lat (usec): min=2473, max=13700, avg=7868.17, stdev=640.03 00:21:44.850 clat percentiles (usec): 00:21:44.850 | 1.00th=[ 6456], 5.00th=[ 6849], 10.00th=[ 7046], 20.00th=[ 7373], 00:21:44.850 | 30.00th=[ 7570], 40.00th=[ 7701], 50.00th=[ 7898], 60.00th=[ 8029], 00:21:44.850 | 70.00th=[ 8160], 80.00th=[ 8356], 90.00th=[ 8586], 95.00th=[ 8848], 00:21:44.850 | 99.00th=[ 9241], 99.50th=[ 9372], 99.90th=[11863], 99.95th=[12125], 00:21:44.850 | 99.99th=[13698] 00:21:44.850 bw ( KiB/s): min=34744, max=36008, per=99.93%, avg=35478.00, stdev=532.92, samples=4 00:21:44.850 iops : min= 8686, max= 9002, avg=8869.50, stdev=133.23, samples=4 00:21:44.850 write: IOPS=8886, BW=34.7MiB/s (36.4MB/s)(69.7MiB/2007msec); 0 zone resets 00:21:44.850 slat (usec): min=2, max=134, avg= 2.62, stdev= 1.47 00:21:44.850 clat (usec): min=1434, max=12082, avg=6511.82, stdev=536.88 00:21:44.850 lat (usec): min=1443, max=12084, avg=6514.44, stdev=536.85 00:21:44.850 clat percentiles (usec): 00:21:44.850 | 1.00th=[ 5342], 5.00th=[ 5669], 10.00th=[ 5866], 20.00th=[ 6128], 00:21:44.850 | 30.00th=[ 6259], 40.00th=[ 6390], 50.00th=[ 6521], 60.00th=[ 6652], 00:21:44.850 | 70.00th=[ 6783], 80.00th=[ 6915], 90.00th=[ 7111], 95.00th=[ 7308], 00:21:44.850 | 99.00th=[ 7635], 99.50th=[ 7767], 99.90th=[10421], 99.95th=[11338], 00:21:44.850 | 99.99th=[11994] 00:21:44.850 bw ( KiB/s): min=35264, max=35768, per=100.00%, avg=35568.00, stdev=216.35, samples=4 00:21:44.850 iops : min= 8816, max= 8942, avg=8892.00, stdev=54.09, samples=4 00:21:44.850 lat (msec) : 2=0.02%, 4=0.12%, 10=99.69%, 20=0.17% 00:21:44.850 cpu : usr=65.85%, sys=32.55%, ctx=112, majf=0, minf=32 00:21:44.850 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:44.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:44.850 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:44.850 issued rwts: total=17814,17836,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:44.850 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:44.850 00:21:44.850 Run status group 0 (all jobs): 00:21:44.850 READ: bw=34.7MiB/s (36.4MB/s), 34.7MiB/s-34.7MiB/s (36.4MB/s-36.4MB/s), io=69.6MiB (73.0MB), run=2007-2007msec 00:21:44.850 WRITE: bw=34.7MiB/s (36.4MB/s), 34.7MiB/s-34.7MiB/s (36.4MB/s-36.4MB/s), io=69.7MiB (73.1MB), run=2007-2007msec 00:21:44.850 07:23:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:44.850 07:23:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:44.850 07:23:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:21:44.850 07:23:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:44.850 07:23:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # 
local sanitizers 00:21:44.850 07:23:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:44.850 07:23:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:21:44.850 07:23:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:21:44.850 07:23:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:21:44.850 07:23:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:44.850 07:23:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:21:44.850 07:23:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:21:44.850 07:23:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:21:44.850 07:23:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:21:44.850 07:23:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:21:44.850 07:23:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:44.850 07:23:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:21:44.850 07:23:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:21:44.850 07:23:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:21:44.850 07:23:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:21:44.850 07:23:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:44.850 07:23:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:45.108 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:45.108 fio-3.35 00:21:45.108 Starting 1 thread 00:21:47.634 00:21:47.634 test: (groupid=0, jobs=1): err= 0: pid=2568512: Wed Nov 20 07:23:50 2024 00:21:47.634 read: IOPS=8396, BW=131MiB/s (138MB/s)(263MiB/2006msec) 00:21:47.634 slat (nsec): min=2765, max=94035, avg=3643.47, stdev=1578.52 00:21:47.634 clat (usec): min=1620, max=16771, avg=8866.37, stdev=2169.53 00:21:47.634 lat (usec): min=1624, max=16775, avg=8870.01, stdev=2169.55 00:21:47.634 clat percentiles (usec): 00:21:47.634 | 1.00th=[ 4686], 5.00th=[ 5604], 10.00th=[ 6128], 20.00th=[ 6980], 00:21:47.634 | 30.00th=[ 7635], 40.00th=[ 8160], 50.00th=[ 8848], 60.00th=[ 9241], 00:21:47.634 | 70.00th=[ 9896], 80.00th=[10683], 90.00th=[11469], 95.00th=[12518], 00:21:47.634 | 99.00th=[15139], 99.50th=[15926], 99.90th=[16581], 99.95th=[16581], 00:21:47.634 | 99.99th=[16712] 00:21:47.634 bw ( KiB/s): min=61888, max=73568, per=50.97%, avg=68480.00, stdev=6002.08, samples=4 00:21:47.634 iops : min= 3868, max= 4598, avg=4280.00, stdev=375.13, samples=4 00:21:47.634 write: IOPS=4784, BW=74.8MiB/s (78.4MB/s)(140MiB/1870msec); 0 zone resets 
00:21:47.634 slat (usec): min=30, max=194, avg=33.88, stdev= 5.89 00:21:47.634 clat (usec): min=5068, max=18327, avg=11394.50, stdev=2101.79 00:21:47.635 lat (usec): min=5100, max=18360, avg=11428.38, stdev=2102.09 00:21:47.635 clat percentiles (usec): 00:21:47.635 | 1.00th=[ 7373], 5.00th=[ 8356], 10.00th=[ 8979], 20.00th=[ 9634], 00:21:47.635 | 30.00th=[10028], 40.00th=[10552], 50.00th=[11076], 60.00th=[11731], 00:21:47.635 | 70.00th=[12387], 80.00th=[13173], 90.00th=[14484], 95.00th=[15139], 00:21:47.635 | 99.00th=[16909], 99.50th=[17433], 99.90th=[18220], 99.95th=[18220], 00:21:47.635 | 99.99th=[18220] 00:21:47.635 bw ( KiB/s): min=63872, max=76928, per=92.88%, avg=71104.00, stdev=6759.00, samples=4 00:21:47.635 iops : min= 3992, max= 4808, avg=4444.00, stdev=422.44, samples=4 00:21:47.635 lat (msec) : 2=0.01%, 4=0.17%, 10=56.16%, 20=43.66% 00:21:47.635 cpu : usr=77.62%, sys=21.14%, ctx=41, majf=0, minf=48 00:21:47.635 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:47.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:47.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:47.635 issued rwts: total=16843,8947,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:47.635 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:47.635 00:21:47.635 Run status group 0 (all jobs): 00:21:47.635 READ: bw=131MiB/s (138MB/s), 131MiB/s-131MiB/s (138MB/s-138MB/s), io=263MiB (276MB), run=2006-2006msec 00:21:47.635 WRITE: bw=74.8MiB/s (78.4MB/s), 74.8MiB/s-74.8MiB/s (78.4MB/s-78.4MB/s), io=140MiB (147MB), run=1870-1870msec 00:21:47.635 07:23:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:47.635 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:21:47.635 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:47.635 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:47.635 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:21:47.635 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:47.635 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:21:47.635 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:47.635 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:21:47.635 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:47.635 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:47.893 rmmod nvme_tcp 00:21:47.893 rmmod nvme_fabrics 00:21:47.893 rmmod nvme_keyring 00:21:47.893 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:47.893 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:21:47.893 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:21:47.893 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2567695 ']' 00:21:47.893 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2567695 00:21:47.893 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 2567695 ']' 00:21:47.893 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@956 -- # kill -0 2567695 00:21:47.893 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:21:47.893 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:47.893 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2567695 00:21:47.893 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:47.893 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:47.893 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2567695' 00:21:47.893 killing process with pid 2567695 00:21:47.893 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 2567695 00:21:47.893 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 2567695 00:21:48.152 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:48.152 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:48.152 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:48.152 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:21:48.152 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:21:48.152 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:48.152 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:21:48.152 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:48.152 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:48.152 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.152 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:48.152 07:23:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.061 07:23:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:50.061 00:21:50.061 real 0m12.493s 00:21:50.061 user 0m37.466s 00:21:50.061 sys 0m4.001s 00:21:50.061 07:23:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:50.061 07:23:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:50.061 ************************************ 00:21:50.061 END TEST nvmf_fio_host 00:21:50.061 ************************************ 00:21:50.319 07:23:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:50.319 07:23:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:50.319 07:23:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:50.319 07:23:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:50.319 ************************************ 00:21:50.319 START TEST nvmf_failover 00:21:50.319 ************************************ 00:21:50.319 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:50.319 * Looking for test storage... 00:21:50.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:50.319 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:50.319 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:21:50.319 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:50.319 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:50.319 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:50.319 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:50.319 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:50.319 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:21:50.319 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:21:50.319 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:21:50.319 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:21:50.319 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:50.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:50.320 --rc genhtml_branch_coverage=1 00:21:50.320 --rc genhtml_function_coverage=1 00:21:50.320 --rc genhtml_legend=1 00:21:50.320 --rc geninfo_all_blocks=1 00:21:50.320 --rc geninfo_unexecuted_blocks=1 00:21:50.320 00:21:50.320 ' 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:50.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:50.320 --rc genhtml_branch_coverage=1 00:21:50.320 --rc genhtml_function_coverage=1 00:21:50.320 --rc genhtml_legend=1 00:21:50.320 --rc geninfo_all_blocks=1 00:21:50.320 --rc geninfo_unexecuted_blocks=1 00:21:50.320 00:21:50.320 ' 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:50.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:50.320 --rc genhtml_branch_coverage=1 00:21:50.320 --rc genhtml_function_coverage=1 00:21:50.320 --rc genhtml_legend=1 00:21:50.320 --rc geninfo_all_blocks=1 00:21:50.320 --rc geninfo_unexecuted_blocks=1 00:21:50.320 00:21:50.320 ' 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:50.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:50.320 --rc genhtml_branch_coverage=1 00:21:50.320 --rc genhtml_function_coverage=1 00:21:50.320 --rc genhtml_legend=1 00:21:50.320 --rc geninfo_all_blocks=1 00:21:50.320 --rc geninfo_unexecuted_blocks=1 00:21:50.320 00:21:50.320 ' 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:50.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
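
The lcov probe earlier in this trace walks scripts/common.sh's lt/cmp_versions helpers, which split both version strings on '.', '-' and ':' and compare them field by field. A minimal standalone sketch of that comparison (a simplified stand-in, not the actual scripts/common.sh code; non-numeric fields are not handled):

    #!/usr/bin/env bash
    # Return success (0) when version $1 sorts strictly below version $2.
    # Fields are split on '.', '-' and ':' and compared numerically,
    # mirroring the cmp_versions walk shown in the trace.
    version_lt() {
        local -a a b
        local IFS=.-:
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1    # equal versions are not "lower than"
    }

    version_lt 1.15 2 && echo "lcov 1.15 predates 2, so the branch-coverage options are enabled"
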
00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:50.320 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.321 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:50.321 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.321 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:50.321 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:50.321 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:21:50.321 07:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:52.850 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:52.850 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:21:52.850 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:52.850 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:52.850 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:52.850 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:52.850 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:52.850 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:21:52.850 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:52.850 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:21:52.850 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:21:52.850 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:21:52.850 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:21:52.850 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:52.851 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:52.851 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:52.851 Found net devices under 0000:09:00.0: cvl_0_0 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:52.851 Found net devices under 0000:09:00.1: cvl_0_1 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:52.851 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:52.851 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms 00:21:52.851 00:21:52.851 --- 10.0.0.2 ping statistics --- 00:21:52.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.851 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:52.851 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:52.851 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:21:52.851 00:21:52.851 --- 10.0.0.1 ping statistics --- 00:21:52.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.851 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2570719 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2570719 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 2570719 ']' 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.851 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:52.852 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:52.852 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:52.852 07:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:52.852 [2024-11-20 07:23:56.018382] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:21:52.852 [2024-11-20 07:23:56.018461] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:52.852 [2024-11-20 07:23:56.093708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:52.852 [2024-11-20 07:23:56.153534] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
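
nvmfappstart above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then blocks in waitforlisten until the target's RPC socket answers. A hedged sketch of that start-and-wait pattern (the polling loop, retry count and use of rpc_get_methods are illustrative; this is not the autotest helper itself, though the paths and flags come from the trace):

    #!/usr/bin/env bash
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # checkout path from the trace
    NS=cvl_0_0_ns_spdk
    SOCK=/var/tmp/spdk.sock

    # Launch the target in the test namespace with the same flags as the log.
    sudo ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    tgt_pid=$!

    # Poll the RPC socket until the app answers; rpc_get_methods is a cheap
    # RPC that succeeds as soon as the target is up.
    for _ in $(seq 1 100); do
        if "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; then
            echo "nvmf_tgt (pid $tgt_pid) is listening on $SOCK"
            break
        fi
        sleep 0.1
    done
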
00:21:52.852 [2024-11-20 07:23:56.153604] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:52.852 [2024-11-20 07:23:56.153618] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:52.852 [2024-11-20 07:23:56.153629] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:52.852 [2024-11-20 07:23:56.153638] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:52.852 [2024-11-20 07:23:56.155103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:52.852 [2024-11-20 07:23:56.155167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:52.852 [2024-11-20 07:23:56.155171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:53.109 07:23:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:53.109 07:23:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:21:53.109 07:23:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:53.109 07:23:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:53.109 07:23:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:53.109 07:23:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:53.109 07:23:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:53.366 [2024-11-20 07:23:56.618822] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:53.366 07:23:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:53.623 Malloc0 00:21:53.623 07:23:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:53.881 07:23:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:54.138 07:23:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:54.702 [2024-11-20 07:23:57.832470] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:54.702 07:23:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:54.960 [2024-11-20 07:23:58.157463] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:54.960 07:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:55.217 [2024-11-20 07:23:58.430346] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:21:55.217 07:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2571007 00:21:55.217 07:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:55.217 07:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:55.217 07:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2571007 /var/tmp/bdevperf.sock 00:21:55.217 07:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 2571007 ']' 00:21:55.217 07:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:55.217 07:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:55.217 07:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:55.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:55.217 07:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:55.217 07:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:55.475 07:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:55.475 07:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:21:55.475 07:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:55.733 NVMe0n1 00:21:55.733 07:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:56.297 00:21:56.297 07:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2571143 00:21:56.297 07:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:56.297 07:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:21:57.230 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:57.488 [2024-11-20 07:24:00.798272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.488 [2024-11-20 07:24:00.798367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.488 [2024-11-20 07:24:00.798384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.488 
[2024-11-20 07:24:00.798396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.488 [2024-11-20 07:24:00.798408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.488 [2024-11-20 07:24:00.798420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.488 [2024-11-20 07:24:00.798432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.488 [2024-11-20 07:24:00.798444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.488 [2024-11-20 07:24:00.798455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.488 [2024-11-20 07:24:00.798467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.488 [2024-11-20 07:24:00.798479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.488 [2024-11-20 07:24:00.798491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.488 [2024-11-20 07:24:00.798503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.488 [2024-11-20 07:24:00.798516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.488 [2024-11-20 07:24:00.798528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.488 [2024-11-20 07:24:00.798540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.488 [2024-11-20 07:24:00.798551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.488 [2024-11-20 07:24:00.798563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.488 [2024-11-20 07:24:00.798575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.488 [2024-11-20 07:24:00.798587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.488 [2024-11-20 07:24:00.798603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.488 [2024-11-20 07:24:00.798631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.488 [2024-11-20 07:24:00.798642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.488 [2024-11-20 07:24:00.798654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.488 [2024-11-20 07:24:00.798665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.488 [2024-11-20 07:24:00.798691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.488 [2024-11-20 07:24:00.798702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.488 [2024-11-20 07:24:00.798713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.488 [2024-11-20 07:24:00.798728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.488 [2024-11-20 07:24:00.798739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.488 [2024-11-20 07:24:00.798750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.488 [2024-11-20 07:24:00.798760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.488 [2024-11-20 07:24:00.798771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.488 [2024-11-20 07:24:00.798782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.488 [2024-11-20 07:24:00.798794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.488 [2024-11-20 07:24:00.798805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.488 [2024-11-20 07:24:00.798816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.488 [2024-11-20 07:24:00.798827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.488 [2024-11-20 07:24:00.798838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.488 [2024-11-20 07:24:00.798848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.488 [2024-11-20 07:24:00.798859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.488 [2024-11-20 07:24:00.798870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.489 [2024-11-20 07:24:00.798881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.489 [2024-11-20 07:24:00.798891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.489 [2024-11-20 07:24:00.798903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.489 [2024-11-20 07:24:00.798914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481c70 is same with the state(6) to be set 00:21:57.489 07:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@45 -- # sleep 3 00:22:00.765 07:24:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:01.023 00:22:01.023 07:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:01.281 [2024-11-20 07:24:04.601368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1482ae0 is same with the state(6) to be set 00:22:01.281 [2024-11-20 07:24:04.601421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1482ae0 is same with the state(6) to be set 00:22:01.281 [2024-11-20 07:24:04.601435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1482ae0 is same with the state(6) to be set 00:22:01.281 [2024-11-20 07:24:04.601448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1482ae0 is same with the state(6) to be set 00:22:01.281 [2024-11-20 07:24:04.601460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1482ae0 is same with the state(6) to be set 00:22:01.281 [2024-11-20 07:24:04.601487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1482ae0 is same with the state(6) to be set 00:22:01.281 [2024-11-20 07:24:04.601500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1482ae0 is same with the state(6) to be set 00:22:01.281 [2024-11-20 07:24:04.601511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1482ae0 is same with the state(6) to be set 00:22:01.281 [2024-11-20 07:24:04.601523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1482ae0 is same with the state(6) to be set 00:22:01.281 [2024-11-20 07:24:04.601536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1482ae0 is same with the state(6) to be set 00:22:01.281 [2024-11-20 07:24:04.601548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1482ae0 is same with the state(6) to be set 00:22:01.281 [2024-11-20 07:24:04.601560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1482ae0 is same with the state(6) to be set 00:22:01.281 [2024-11-20 07:24:04.601572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1482ae0 is same with the state(6) to be set 00:22:01.281 [2024-11-20 07:24:04.601584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1482ae0 is same with the state(6) to be set 00:22:01.281 [2024-11-20 07:24:04.601597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1482ae0 is same with the state(6) to be set 00:22:01.281 [2024-11-20 07:24:04.601608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1482ae0 is same with the state(6) to be set 00:22:01.281 [2024-11-20 07:24:04.601620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1482ae0 is same with the state(6) to be set 00:22:01.281 [2024-11-20 07:24:04.601632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1482ae0 is same with the state(6) to be set 00:22:01.281 [2024-11-20 
07:24:04.601645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1482ae0 is same with the state(6) to be set 00:22:01.281 [2024-11-20 07:24:04.601672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1482ae0 is same with the state(6) to be set 00:22:01.281 [2024-11-20 07:24:04.601685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1482ae0 is same with the state(6) to be set 00:22:01.281 [2024-11-20 07:24:04.601697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1482ae0 is same with the state(6) to be set 00:22:01.281 [2024-11-20 07:24:04.601709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1482ae0 is same with the state(6) to be set 00:22:01.281 [2024-11-20 07:24:04.601720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1482ae0 is same with the state(6) to be set 00:22:01.281 [2024-11-20 07:24:04.601731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1482ae0 is same with the state(6) to be set 00:22:01.281 [2024-11-20 07:24:04.601743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1482ae0 is same with the state(6) to be set 00:22:01.281 [2024-11-20 07:24:04.601754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1482ae0 is same with the state(6) to be set 00:22:01.282 07:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:22:04.560 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:04.560 [2024-11-20 07:24:07.919647] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:04.560 07:24:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:22:05.937 07:24:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:05.937 [2024-11-20 07:24:09.214346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1483b60 is same with the state(6) to be set 00:22:05.938 [2024-11-20 07:24:09.214410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1483b60 is same with the state(6) to be set 00:22:05.938 [2024-11-20 07:24:09.214426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1483b60 is same with the state(6) to be set 00:22:05.938 [2024-11-20 07:24:09.214438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1483b60 is same with the state(6) to be set 00:22:05.938 [2024-11-20 07:24:09.214450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1483b60 is same with the state(6) to be set 00:22:05.938 [2024-11-20 07:24:09.214461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1483b60 is same with the state(6) to be set 00:22:05.938 [2024-11-20 07:24:09.214473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1483b60 is same with the state(6) to be set 00:22:05.938 [2024-11-20 07:24:09.214485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1483b60 is same with the state(6) to be set 
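
The listener churn above is what actually drives the failover: while bdevperf keeps verify I/O running against NVMe0n1, the test drops and re-adds TCP listeners on cnode1 so the initiator has to move between ports 4420, 4421 and 4422, and each removal produces the burst of tqpair recv-state errors seen here. Collected from the rpc.py commands in the trace, the sequence amounts to roughly:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    $RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420   # fail over to 4421
    sleep 3
    # (between these steps the test also attaches a third path on port 4422
    #  through the bdevperf RPC socket, as shown in the trace)
    $RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421   # fail over to 4422
    sleep 3
    $RPC nvmf_subsystem_add_listener    "$NQN" -t tcp -a 10.0.0.2 -s 4420   # restore the first port
    sleep 1
    $RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422   # fail back to 4420
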
00:22:05.938 [2024-11-20 07:24:09.214496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1483b60 is same with the state(6) to be set 00:22:05.938 [2024-11-20 07:24:09.214508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1483b60 is same with the state(6) to be set 00:22:05.938 [2024-11-20 07:24:09.214519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1483b60 is same with the state(6) to be set 00:22:05.938 [2024-11-20 07:24:09.214530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1483b60 is same with the state(6) to be set 00:22:05.938 [2024-11-20 07:24:09.214542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1483b60 is same with the state(6) to be set 00:22:05.938 [2024-11-20 07:24:09.214554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1483b60 is same with the state(6) to be set 00:22:05.938 [2024-11-20 07:24:09.214566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1483b60 is same with the state(6) to be set 00:22:05.938 [2024-11-20 07:24:09.214577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1483b60 is same with the state(6) to be set 00:22:05.938 [2024-11-20 07:24:09.214589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1483b60 is same with the state(6) to be set 00:22:05.938 [2024-11-20 07:24:09.214607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1483b60 is same with the state(6) to be set 00:22:05.938 [2024-11-20 07:24:09.214634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1483b60 is same with the state(6) to be set 00:22:05.938 [2024-11-20 07:24:09.214646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1483b60 is same with the state(6) to be set 00:22:05.938 [2024-11-20 07:24:09.214658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1483b60 is same with the state(6) to be set 00:22:05.938 [2024-11-20 07:24:09.214669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1483b60 is same with the state(6) to be set 00:22:05.938 [2024-11-20 07:24:09.214680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1483b60 is same with the state(6) to be set 00:22:05.938 [2024-11-20 07:24:09.214692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1483b60 is same with the state(6) to be set 00:22:05.938 [2024-11-20 07:24:09.214705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1483b60 is same with the state(6) to be set 00:22:05.938 [2024-11-20 07:24:09.214716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1483b60 is same with the state(6) to be set 00:22:05.938 [2024-11-20 07:24:09.214728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1483b60 is same with the state(6) to be set 00:22:05.938 [2024-11-20 07:24:09.214741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1483b60 is same with the state(6) to be set 00:22:05.938 [2024-11-20 07:24:09.214757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1483b60 is same with the state(6) to be set 00:22:05.938 [2024-11-20 07:24:09.214770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1483b60 is same with the state(6) to be set 00:22:05.938 [2024-11-20 07:24:09.214782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1483b60 is same with the state(6) to be set 00:22:05.938 [2024-11-20 07:24:09.214793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1483b60 is same with the state(6) to be set 00:22:05.938 07:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2571143 00:22:12.530 { 00:22:12.530 "results": [ 00:22:12.530 { 00:22:12.530 "job": "NVMe0n1", 00:22:12.530 "core_mask": "0x1", 00:22:12.530 "workload": "verify", 00:22:12.530 "status": "finished", 00:22:12.530 "verify_range": { 00:22:12.530 "start": 0, 00:22:12.530 "length": 16384 00:22:12.530 }, 00:22:12.530 "queue_depth": 128, 00:22:12.530 "io_size": 4096, 00:22:12.530 "runtime": 15.052635, 00:22:12.530 "iops": 8230.386241345785, 00:22:12.530 "mibps": 32.14994625525697, 00:22:12.530 "io_failed": 14045, 00:22:12.530 "io_timeout": 0, 00:22:12.530 "avg_latency_us": 13906.97645098112, 00:22:12.530 "min_latency_us": 546.1333333333333, 00:22:12.530 "max_latency_us": 45632.474074074074 00:22:12.530 } 00:22:12.530 ], 00:22:12.530 "core_count": 1 00:22:12.530 } 00:22:12.530 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2571007 00:22:12.530 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 2571007 ']' 00:22:12.530 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 2571007 00:22:12.530 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:22:12.530 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:12.530 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2571007 00:22:12.530 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:12.530 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:12.530 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2571007' 00:22:12.530 killing process with pid 2571007 00:22:12.530 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 2571007 00:22:12.530 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 2571007 00:22:12.530 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:12.530 [2024-11-20 07:23:58.499123] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:22:12.530 [2024-11-20 07:23:58.499224] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2571007 ] 00:22:12.530 [2024-11-20 07:23:58.567225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.530 [2024-11-20 07:23:58.626716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.530 Running I/O for 15 seconds... 
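
The perform_tests summary above reports throughput both as IOPS and MiB/s; the two are consistent with the 4096-byte io_size (8230.39 IOPS x 4096 B ≈ 32.15 MiB/s), and io_failed counts the verify I/Os aborted while paths were being dropped. A small sketch for pulling those fields out with jq (field names come from the output above; saving the JSON block to results.json is an assumption):

    # assumes the JSON block printed by bdevperf.py was saved to results.json
    jq -r '.results[0] | "\(.iops) IOPS, \(.mibps) MiB/s, \(.io_failed) failed I/Os over \(.runtime)s"' results.json

    # cross-check MiB/s from IOPS and the configured io_size
    jq -r '.results[0] | (.iops * .io_size / 1048576)' results.json
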
00:22:12.530 8248.00 IOPS, 32.22 MiB/s [2024-11-20T06:24:15.963Z] [2024-11-20 07:24:00.799666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:76552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.530 [2024-11-20 07:24:00.799705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.530 [2024-11-20 07:24:00.799730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:76560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.530 [2024-11-20 07:24:00.799745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.530 [2024-11-20 07:24:00.799760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:76568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.530 [2024-11-20 07:24:00.799773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.530 [2024-11-20 07:24:00.799789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:76576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.530 [2024-11-20 07:24:00.799802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.530 [2024-11-20 07:24:00.799817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:76584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.530 [2024-11-20 07:24:00.799830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.530 [2024-11-20 07:24:00.799845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:76592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.530 [2024-11-20 07:24:00.799858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.530 [2024-11-20 07:24:00.799872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:76600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.530 [2024-11-20 07:24:00.799886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.530 [2024-11-20 07:24:00.799901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:76608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.530 [2024-11-20 07:24:00.799914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.530 [2024-11-20 07:24:00.799928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:76616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.530 [2024-11-20 07:24:00.799941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.530 [2024-11-20 07:24:00.799955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:76624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.530 [2024-11-20 07:24:00.799969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:22:12.530 [2024-11-20 07:24:00.799984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:76632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.530 [2024-11-20 07:24:00.799997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.530 [2024-11-20 07:24:00.800020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:76640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.530 [2024-11-20 07:24:00.800035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.530 [2024-11-20 07:24:00.800049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:76648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.530 [2024-11-20 07:24:00.800063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.530 [2024-11-20 07:24:00.800077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:77232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.530 [2024-11-20 07:24:00.800091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.530 [2024-11-20 07:24:00.800105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:77240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.530 [2024-11-20 07:24:00.800119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.530 [2024-11-20 07:24:00.800133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:77248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.530 [2024-11-20 07:24:00.800147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.530 [2024-11-20 07:24:00.800168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:77256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.530 [2024-11-20 07:24:00.800183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.530 [2024-11-20 07:24:00.800197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:77264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.530 [2024-11-20 07:24:00.800211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.531 [2024-11-20 07:24:00.800227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:77272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.531 [2024-11-20 07:24:00.800240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.531 [2024-11-20 07:24:00.800257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:77280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.531 [2024-11-20 07:24:00.800272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.531 [2024-11-20 07:24:00.800287] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:76656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.531 [2024-11-20 07:24:00.800331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.531 [2024-11-20 07:24:00.800349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:76664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.531 [2024-11-20 07:24:00.800364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.531 [2024-11-20 07:24:00.800379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:76672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.531 [2024-11-20 07:24:00.800393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.531 [2024-11-20 07:24:00.800407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:76680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.531 [2024-11-20 07:24:00.800430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.531 [2024-11-20 07:24:00.800446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:76688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.531 [2024-11-20 07:24:00.800460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.531 [2024-11-20 07:24:00.800475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:76696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.531 [2024-11-20 07:24:00.800488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.531 [2024-11-20 07:24:00.800503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:76704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.531 [2024-11-20 07:24:00.800517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.531 [2024-11-20 07:24:00.800532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:76712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.531 [2024-11-20 07:24:00.800546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.531 [2024-11-20 07:24:00.800561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:76720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.531 [2024-11-20 07:24:00.800574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.531 [2024-11-20 07:24:00.800589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:76728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.531 [2024-11-20 07:24:00.800604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.531 [2024-11-20 07:24:00.800634] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:76736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.531 [2024-11-20 07:24:00.800647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.531 [2024-11-20 07:24:00.800668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:76744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.531 [2024-11-20 07:24:00.800681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.531 [2024-11-20 07:24:00.800697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:76752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.531 [2024-11-20 07:24:00.800711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.531 [2024-11-20 07:24:00.800725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:76760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.531 [2024-11-20 07:24:00.800738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.531 [2024-11-20 07:24:00.800753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:76768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.531 [2024-11-20 07:24:00.800766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.531 [2024-11-20 07:24:00.800781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:76776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.531 [2024-11-20 07:24:00.800794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.531 [2024-11-20 07:24:00.800812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:76784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.531 [2024-11-20 07:24:00.800826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.531 [2024-11-20 07:24:00.800840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:76792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.531 [2024-11-20 07:24:00.800854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.531 [2024-11-20 07:24:00.800869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:76800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.531 [2024-11-20 07:24:00.800882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.531 [2024-11-20 07:24:00.800896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:76808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.531 [2024-11-20 07:24:00.800910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.531 [2024-11-20 07:24:00.800925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:110 nsid:1 lba:76816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.531 [2024-11-20 07:24:00.800938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.531 [2024-11-20 07:24:00.800952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:76824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.531 [2024-11-20 07:24:00.800966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.531 [2024-11-20 07:24:00.800980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:76832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.531 [2024-11-20 07:24:00.800993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.531 [2024-11-20 07:24:00.801008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:76840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.531 [2024-11-20 07:24:00.801021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.531 [2024-11-20 07:24:00.801036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:77288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.531 [2024-11-20 07:24:00.801049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.531 [2024-11-20 07:24:00.801063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:77296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.531 [2024-11-20 07:24:00.801076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.531 [2024-11-20 07:24:00.801091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:76848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.531 [2024-11-20 07:24:00.801104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.531 [2024-11-20 07:24:00.801119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:76856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.531 [2024-11-20 07:24:00.801132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.531 [2024-11-20 07:24:00.801147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.531 [2024-11-20 07:24:00.801163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.531 [2024-11-20 07:24:00.801179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:76872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.531 [2024-11-20 07:24:00.801192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.531 [2024-11-20 07:24:00.801206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:76880 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.531 [2024-11-20 07:24:00.801220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.531 [2024-11-20 07:24:00.801234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:76888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.531 [2024-11-20 07:24:00.801247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.531 [2024-11-20 07:24:00.801261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:76896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.531 [2024-11-20 07:24:00.801274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.531 [2024-11-20 07:24:00.801297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:76904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.531 [2024-11-20 07:24:00.801334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.531 [2024-11-20 07:24:00.801351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:76912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.531 [2024-11-20 07:24:00.801366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.531 [2024-11-20 07:24:00.801381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:76920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.531 [2024-11-20 07:24:00.801395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.531 [2024-11-20 07:24:00.801410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:76928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.531 [2024-11-20 07:24:00.801423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.531 [2024-11-20 07:24:00.801437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:76936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.532 [2024-11-20 07:24:00.801451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.532 [2024-11-20 07:24:00.801466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:76944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.532 [2024-11-20 07:24:00.801479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.532 [2024-11-20 07:24:00.801494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.532 [2024-11-20 07:24:00.801507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.532 [2024-11-20 07:24:00.801522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:76960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:12.532 [2024-11-20 07:24:00.801535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.532 [2024-11-20 07:24:00.801550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:76968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.532 [2024-11-20 07:24:00.801567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.532 [2024-11-20 07:24:00.801583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:76976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.532 [2024-11-20 07:24:00.801596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.532 [2024-11-20 07:24:00.801626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.532 [2024-11-20 07:24:00.801640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.532 [2024-11-20 07:24:00.801662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:76992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.532 [2024-11-20 07:24:00.801676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.532 [2024-11-20 07:24:00.801690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:77000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.532 [2024-11-20 07:24:00.801703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.532 [2024-11-20 07:24:00.801717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:77008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.532 [2024-11-20 07:24:00.801731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.532 [2024-11-20 07:24:00.801745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.532 [2024-11-20 07:24:00.801758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.532 [2024-11-20 07:24:00.801773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.532 [2024-11-20 07:24:00.801786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.532 [2024-11-20 07:24:00.801801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:77032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.532 [2024-11-20 07:24:00.801814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.532 [2024-11-20 07:24:00.801828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:77040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.532 [2024-11-20 07:24:00.801841] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.532 [2024-11-20 07:24:00.801856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:77048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.532 [2024-11-20 07:24:00.801869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.532 [2024-11-20 07:24:00.801883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:77056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.532 [2024-11-20 07:24:00.801896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.532 [2024-11-20 07:24:00.801911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:77064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.532 [2024-11-20 07:24:00.801923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.532 [2024-11-20 07:24:00.801942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:77072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.532 [2024-11-20 07:24:00.801955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.532 [2024-11-20 07:24:00.801970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.532 [2024-11-20 07:24:00.801983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.532 [2024-11-20 07:24:00.801997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:77088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.532 [2024-11-20 07:24:00.802010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.532 [2024-11-20 07:24:00.802025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.532 [2024-11-20 07:24:00.802038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.532 [2024-11-20 07:24:00.802069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:77104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.532 [2024-11-20 07:24:00.802082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.532 [2024-11-20 07:24:00.802097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:77112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.532 [2024-11-20 07:24:00.802111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.532 [2024-11-20 07:24:00.802134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:77120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.532 [2024-11-20 07:24:00.802157] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.532 [2024-11-20 07:24:00.802175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:77128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.532 [2024-11-20 07:24:00.802190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.532 [2024-11-20 07:24:00.802205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:77136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.532 [2024-11-20 07:24:00.802219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.532 [2024-11-20 07:24:00.802234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:77144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.532 [2024-11-20 07:24:00.802248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.532 [2024-11-20 07:24:00.802263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:77152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.532 [2024-11-20 07:24:00.802276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.532 [2024-11-20 07:24:00.802298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:77160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.532 [2024-11-20 07:24:00.802338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.532 [2024-11-20 07:24:00.802356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:77168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.532 [2024-11-20 07:24:00.802375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.532 [2024-11-20 07:24:00.802392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:77176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.532 [2024-11-20 07:24:00.802406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.532 [2024-11-20 07:24:00.802422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:77184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.532 [2024-11-20 07:24:00.802436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.532 [2024-11-20 07:24:00.802451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:77192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.532 [2024-11-20 07:24:00.802465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.532 [2024-11-20 07:24:00.802481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:77200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.532 [2024-11-20 07:24:00.802495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.532 [2024-11-20 07:24:00.802510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:77208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.532 [2024-11-20 07:24:00.802524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.532 [2024-11-20 07:24:00.802539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.532 [2024-11-20 07:24:00.802553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.532 [2024-11-20 07:24:00.802569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:77224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.532 [2024-11-20 07:24:00.802583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.532 [2024-11-20 07:24:00.802598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:77304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.532 [2024-11-20 07:24:00.802628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.532 [2024-11-20 07:24:00.802644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:77312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.532 [2024-11-20 07:24:00.802661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.532 [2024-11-20 07:24:00.802681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:77320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.532 [2024-11-20 07:24:00.802696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.533 [2024-11-20 07:24:00.802711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:77328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.533 [2024-11-20 07:24:00.802726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.533 [2024-11-20 07:24:00.802741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:77336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.533 [2024-11-20 07:24:00.802754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.533 [2024-11-20 07:24:00.802773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:77344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.533 [2024-11-20 07:24:00.802787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.533 [2024-11-20 07:24:00.802803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:77352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.533 [2024-11-20 07:24:00.802817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:12.533 [2024-11-20 07:24:00.802831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:77360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.533 [2024-11-20 07:24:00.802845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.533 [2024-11-20 07:24:00.802860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:77368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.533 [2024-11-20 07:24:00.802873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.533 [2024-11-20 07:24:00.802888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:77376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.533 [2024-11-20 07:24:00.802902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.533 [2024-11-20 07:24:00.802917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:77384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.533 [2024-11-20 07:24:00.802930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.533 [2024-11-20 07:24:00.802946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:77392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.533 [2024-11-20 07:24:00.802959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.533 [2024-11-20 07:24:00.802974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:77400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.533 [2024-11-20 07:24:00.802988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.533 [2024-11-20 07:24:00.803003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.533 [2024-11-20 07:24:00.803016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.533 [2024-11-20 07:24:00.803031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:77416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.533 [2024-11-20 07:24:00.803045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.533 [2024-11-20 07:24:00.803059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:77424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.533 [2024-11-20 07:24:00.803073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.533 [2024-11-20 07:24:00.803088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:77432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.533 [2024-11-20 07:24:00.803101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.533 [2024-11-20 
07:24:00.803116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:77440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.533 [2024-11-20 07:24:00.803134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.533 [2024-11-20 07:24:00.803156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:77448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.533 [2024-11-20 07:24:00.803171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.533 [2024-11-20 07:24:00.803187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:77456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.533 [2024-11-20 07:24:00.803201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.533 [2024-11-20 07:24:00.803215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:77464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.533 [2024-11-20 07:24:00.803229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.533 [2024-11-20 07:24:00.803244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:77472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.533 [2024-11-20 07:24:00.803258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.533 [2024-11-20 07:24:00.803273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:77480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.533 [2024-11-20 07:24:00.803321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.533 [2024-11-20 07:24:00.803339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:77488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.533 [2024-11-20 07:24:00.803353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.533 [2024-11-20 07:24:00.803368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:77496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.533 [2024-11-20 07:24:00.803384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.533 [2024-11-20 07:24:00.803399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:77504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.533 [2024-11-20 07:24:00.803413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.533 [2024-11-20 07:24:00.803428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:77512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.533 [2024-11-20 07:24:00.803442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.533 [2024-11-20 07:24:00.803458] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:77520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.533 [2024-11-20 07:24:00.803472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.533 [2024-11-20 07:24:00.803487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:77528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.533 [2024-11-20 07:24:00.803501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.533 [2024-11-20 07:24:00.803516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:77536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.533 [2024-11-20 07:24:00.803530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.533 [2024-11-20 07:24:00.803545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:77544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.533 [2024-11-20 07:24:00.803563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.533 [2024-11-20 07:24:00.803580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:77552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.533 [2024-11-20 07:24:00.803598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.533 [2024-11-20 07:24:00.803646] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.533 [2024-11-20 07:24:00.803663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77560 len:8 PRP1 0x0 PRP2 0x0 00:22:12.533 [2024-11-20 07:24:00.803676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.533 [2024-11-20 07:24:00.803694] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.533 [2024-11-20 07:24:00.803712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.533 [2024-11-20 07:24:00.803723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77568 len:8 PRP1 0x0 PRP2 0x0 00:22:12.533 [2024-11-20 07:24:00.803736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.533 [2024-11-20 07:24:00.803808] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:12.533 [2024-11-20 07:24:00.803860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.533 [2024-11-20 07:24:00.803880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.533 [2024-11-20 07:24:00.803896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.533 [2024-11-20 07:24:00.803910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.533 [2024-11-20 07:24:00.803924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.533 [2024-11-20 07:24:00.803937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.533 [2024-11-20 07:24:00.803951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.533 [2024-11-20 07:24:00.803965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.533 [2024-11-20 07:24:00.803979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:12.533 [2024-11-20 07:24:00.807420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:12.533 [2024-11-20 07:24:00.807459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2185560 (9): Bad file descriptor 00:22:12.533 [2024-11-20 07:24:00.965936] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:22:12.533 7665.50 IOPS, 29.94 MiB/s [2024-11-20T06:24:15.966Z] 7947.00 IOPS, 31.04 MiB/s [2024-11-20T06:24:15.966Z] 8124.00 IOPS, 31.73 MiB/s [2024-11-20T06:24:15.966Z] [2024-11-20 07:24:04.602027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:113608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.533 [2024-11-20 07:24:04.602070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.534 [2024-11-20 07:24:04.602096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:113616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.534 [2024-11-20 07:24:04.602135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.534 [2024-11-20 07:24:04.602153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:113624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.534 [2024-11-20 07:24:04.602167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.534 [2024-11-20 07:24:04.602197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:113632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.534 [2024-11-20 07:24:04.602211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.534 [2024-11-20 07:24:04.602226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:113640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.534 [2024-11-20 07:24:04.602239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.534 [2024-11-20 07:24:04.602254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:113648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.534 [2024-11-20 07:24:04.602267] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.534 [2024-11-20 07:24:04.602282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:113656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.534 [2024-11-20 07:24:04.602296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.534 [2024-11-20 07:24:04.602337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:113664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.534 [2024-11-20 07:24:04.602353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.534 [2024-11-20 07:24:04.602368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:113672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.534 [2024-11-20 07:24:04.602382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.534 [2024-11-20 07:24:04.602396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:113680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.534 [2024-11-20 07:24:04.602410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.534 [2024-11-20 07:24:04.602425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:113688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.534 [2024-11-20 07:24:04.602438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.534 [2024-11-20 07:24:04.602453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:113696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.534 [2024-11-20 07:24:04.602467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.534 [2024-11-20 07:24:04.602482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:113704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.534 [2024-11-20 07:24:04.602495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.534 [2024-11-20 07:24:04.602510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:113712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.534 [2024-11-20 07:24:04.602525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.534 [2024-11-20 07:24:04.602544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:113720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.534 [2024-11-20 07:24:04.602559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.534 [2024-11-20 07:24:04.602574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:113728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.534 [2024-11-20 07:24:04.602587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.534 [2024-11-20 07:24:04.602602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.534 [2024-11-20 07:24:04.602617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.534 [2024-11-20 07:24:04.602647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:113744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.534 [2024-11-20 07:24:04.602661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.534 [2024-11-20 07:24:04.602675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:113752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.534 [2024-11-20 07:24:04.602688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.534 [2024-11-20 07:24:04.602703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:113760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.534 [2024-11-20 07:24:04.602716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.534 [2024-11-20 07:24:04.602730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:113768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.534 [2024-11-20 07:24:04.602743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.534 [2024-11-20 07:24:04.602758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:113776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.534 [2024-11-20 07:24:04.602770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.534 [2024-11-20 07:24:04.602784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:113784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.534 [2024-11-20 07:24:04.602799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.534 [2024-11-20 07:24:04.602814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:113792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.534 [2024-11-20 07:24:04.602827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.534 [2024-11-20 07:24:04.602841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:113800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.534 [2024-11-20 07:24:04.602856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.534 [2024-11-20 07:24:04.602872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:113808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.534 [2024-11-20 07:24:04.602886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:12.534 [2024-11-20 07:24:04.602901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:113816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.534 [2024-11-20 07:24:04.602918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.534 [2024-11-20 07:24:04.602934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:113824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.534 [2024-11-20 07:24:04.602947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.534 [2024-11-20 07:24:04.602962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:113832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.534 [2024-11-20 07:24:04.602975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.534 [2024-11-20 07:24:04.602989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:113840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.534 [2024-11-20 07:24:04.603002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.534 [2024-11-20 07:24:04.603017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:113848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.534 [2024-11-20 07:24:04.603031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.534 [2024-11-20 07:24:04.603046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:113856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.534 [2024-11-20 07:24:04.603059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.534 [2024-11-20 07:24:04.603074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:113864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.534 [2024-11-20 07:24:04.603088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.534 [2024-11-20 07:24:04.603103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:113872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.534 [2024-11-20 07:24:04.603117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.534 [2024-11-20 07:24:04.603131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:113880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.534 [2024-11-20 07:24:04.603144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.534 [2024-11-20 07:24:04.603159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:113888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.535 [2024-11-20 07:24:04.603173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.535 
[2024-11-20 07:24:04.603187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:113896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.535 [2024-11-20 07:24:04.603212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.535 [2024-11-20 07:24:04.603226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:113904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.535 [2024-11-20 07:24:04.603239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.535 [2024-11-20 07:24:04.603253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:113912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.535 [2024-11-20 07:24:04.603266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.535 [2024-11-20 07:24:04.603285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:113920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.535 [2024-11-20 07:24:04.603299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.535 [2024-11-20 07:24:04.603339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:113928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.535 [2024-11-20 07:24:04.603354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.535 [2024-11-20 07:24:04.603369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:113936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.535 [2024-11-20 07:24:04.603383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.535 [2024-11-20 07:24:04.603397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:113944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.535 [2024-11-20 07:24:04.603411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.535 [2024-11-20 07:24:04.603426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:113952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.535 [2024-11-20 07:24:04.603440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.535 [2024-11-20 07:24:04.603455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:113960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.535 [2024-11-20 07:24:04.603468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.535 [2024-11-20 07:24:04.603483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:113968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.535 [2024-11-20 07:24:04.603496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.535 [2024-11-20 07:24:04.603511] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:113976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.535 [2024-11-20 07:24:04.603524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.535 [2024-11-20 07:24:04.603544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:113984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.535 [2024-11-20 07:24:04.603557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.535 [2024-11-20 07:24:04.603572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:113992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.535 [2024-11-20 07:24:04.603586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.535 [2024-11-20 07:24:04.603601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:114000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.535 [2024-11-20 07:24:04.603628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.535 [2024-11-20 07:24:04.603644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:114008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.535 [2024-11-20 07:24:04.603657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.535 [2024-11-20 07:24:04.603671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:114016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.535 [2024-11-20 07:24:04.603684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.535 [2024-11-20 07:24:04.603703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:114024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.535 [2024-11-20 07:24:04.603717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.535 [2024-11-20 07:24:04.603731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:114088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.535 [2024-11-20 07:24:04.603744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.535 [2024-11-20 07:24:04.603759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:114096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.535 [2024-11-20 07:24:04.603772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.535 [2024-11-20 07:24:04.603786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:114032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.535 [2024-11-20 07:24:04.603799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.535 [2024-11-20 07:24:04.603813] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:114040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.535 [2024-11-20 07:24:04.603827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.535 [2024-11-20 07:24:04.603842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:114048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.535 [2024-11-20 07:24:04.603855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.535 [2024-11-20 07:24:04.603869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:114056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.535 [2024-11-20 07:24:04.603882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.535 [2024-11-20 07:24:04.603897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:114064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.535 [2024-11-20 07:24:04.603910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.535 [2024-11-20 07:24:04.603924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:114072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.535 [2024-11-20 07:24:04.603937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.535 [2024-11-20 07:24:04.603951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:114080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.535 [2024-11-20 07:24:04.603964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.535 [2024-11-20 07:24:04.603979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:114104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.535 [2024-11-20 07:24:04.603992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.535 [2024-11-20 07:24:04.604006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:114112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.535 [2024-11-20 07:24:04.604019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.535 [2024-11-20 07:24:04.604034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:114120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.535 [2024-11-20 07:24:04.604052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.535 [2024-11-20 07:24:04.604068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:114128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.535 [2024-11-20 07:24:04.604082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.535 [2024-11-20 07:24:04.604096] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:114136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.535 [2024-11-20 07:24:04.604109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.535 [2024-11-20 07:24:04.604124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:114144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.535 [2024-11-20 07:24:04.604136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.535 [2024-11-20 07:24:04.604150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:114152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.535 [2024-11-20 07:24:04.604164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.535 [2024-11-20 07:24:04.604178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:114160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.535 [2024-11-20 07:24:04.604191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.535 [2024-11-20 07:24:04.604205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:114168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.535 [2024-11-20 07:24:04.604218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.535 [2024-11-20 07:24:04.604233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:114176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.535 [2024-11-20 07:24:04.604246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.535 [2024-11-20 07:24:04.604260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:114184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.535 [2024-11-20 07:24:04.604273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.535 [2024-11-20 07:24:04.604287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:114192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.535 [2024-11-20 07:24:04.604300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.535 [2024-11-20 07:24:04.604338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:114200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.535 [2024-11-20 07:24:04.604353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.536 [2024-11-20 07:24:04.604368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:114208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.536 [2024-11-20 07:24:04.604381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.536 [2024-11-20 07:24:04.604396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 
lba:114216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.536 [2024-11-20 07:24:04.604410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.536 [2024-11-20 07:24:04.604429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:114224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.536 [2024-11-20 07:24:04.604443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.536 [2024-11-20 07:24:04.604458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:114232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.536 [2024-11-20 07:24:04.604472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.536 [2024-11-20 07:24:04.604487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:114240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.536 [2024-11-20 07:24:04.604500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.536 [2024-11-20 07:24:04.604516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:114248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.536 [2024-11-20 07:24:04.604532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.536 [2024-11-20 07:24:04.604547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:114256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.536 [2024-11-20 07:24:04.604575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.536 [2024-11-20 07:24:04.604591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:114264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.536 [2024-11-20 07:24:04.604605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.536 [2024-11-20 07:24:04.604636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:114272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.536 [2024-11-20 07:24:04.604651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.536 [2024-11-20 07:24:04.604666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:114280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.536 [2024-11-20 07:24:04.604679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.536 [2024-11-20 07:24:04.604694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:114288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.536 [2024-11-20 07:24:04.604708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.536 [2024-11-20 07:24:04.604724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:114296 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:12.536 [2024-11-20 07:24:04.604737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.536 [2024-11-20 07:24:04.604752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:114304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.536 [2024-11-20 07:24:04.604765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.536 [2024-11-20 07:24:04.604780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:114312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.536 [2024-11-20 07:24:04.604794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.536 [2024-11-20 07:24:04.604809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:114320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.536 [2024-11-20 07:24:04.604826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.536 [2024-11-20 07:24:04.604842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:114328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.536 [2024-11-20 07:24:04.604856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.536 [2024-11-20 07:24:04.604871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:114336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.536 [2024-11-20 07:24:04.604885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.536 [2024-11-20 07:24:04.604899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:114344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.536 [2024-11-20 07:24:04.604913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.536 [2024-11-20 07:24:04.604928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:114352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.536 [2024-11-20 07:24:04.604945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.536 [2024-11-20 07:24:04.604961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:114360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.536 [2024-11-20 07:24:04.604975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.536 [2024-11-20 07:24:04.604990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:114368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.536 [2024-11-20 07:24:04.605003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.536 [2024-11-20 07:24:04.605019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:114376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.536 [2024-11-20 
07:24:04.605033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.536 [2024-11-20 07:24:04.605048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:114384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.536 [2024-11-20 07:24:04.605062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.536 [2024-11-20 07:24:04.605077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:114392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.536 [2024-11-20 07:24:04.605092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.536 [2024-11-20 07:24:04.605107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:114400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.536 [2024-11-20 07:24:04.605120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.536 [2024-11-20 07:24:04.605135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:114408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.536 [2024-11-20 07:24:04.605149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.536 [2024-11-20 07:24:04.605165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:114416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.536 [2024-11-20 07:24:04.605179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.536 [2024-11-20 07:24:04.605198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.536 [2024-11-20 07:24:04.605213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.536 [2024-11-20 07:24:04.605228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:114432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.536 [2024-11-20 07:24:04.605242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.536 [2024-11-20 07:24:04.605257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:114440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.536 [2024-11-20 07:24:04.605271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.536 [2024-11-20 07:24:04.605310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:114448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.536 [2024-11-20 07:24:04.605338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.536 [2024-11-20 07:24:04.605362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:114456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.536 [2024-11-20 07:24:04.605378] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.536 [2024-11-20 07:24:04.605394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:114464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.536 [2024-11-20 07:24:04.605409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.536 [2024-11-20 07:24:04.605424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:114472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.536 [2024-11-20 07:24:04.605438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.536 [2024-11-20 07:24:04.605454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:114480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.536 [2024-11-20 07:24:04.605468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.536 [2024-11-20 07:24:04.605483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:114488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.536 [2024-11-20 07:24:04.605498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.536 [2024-11-20 07:24:04.605513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:114496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.536 [2024-11-20 07:24:04.605535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.536 [2024-11-20 07:24:04.605551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:114504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.536 [2024-11-20 07:24:04.605566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.536 [2024-11-20 07:24:04.605581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:114512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.536 [2024-11-20 07:24:04.605596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.536 [2024-11-20 07:24:04.605611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:114520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.536 [2024-11-20 07:24:04.605645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.537 [2024-11-20 07:24:04.605662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:114528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.537 [2024-11-20 07:24:04.605676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.537 [2024-11-20 07:24:04.605698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:114536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.537 [2024-11-20 07:24:04.605713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.537 [2024-11-20 07:24:04.605729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:114544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.537 [2024-11-20 07:24:04.605742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.537 [2024-11-20 07:24:04.605757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:114552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.537 [2024-11-20 07:24:04.605771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.537 [2024-11-20 07:24:04.605785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.537 [2024-11-20 07:24:04.605800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.537 [2024-11-20 07:24:04.605814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:114568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.537 [2024-11-20 07:24:04.605829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.537 [2024-11-20 07:24:04.605844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:114576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.537 [2024-11-20 07:24:04.605857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.537 [2024-11-20 07:24:04.605872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:114584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.537 [2024-11-20 07:24:04.605885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.537 [2024-11-20 07:24:04.605900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:114592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.537 [2024-11-20 07:24:04.605913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.537 [2024-11-20 07:24:04.605928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:114600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.537 [2024-11-20 07:24:04.605942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.537 [2024-11-20 07:24:04.605957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:114608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.537 [2024-11-20 07:24:04.605971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.537 [2024-11-20 07:24:04.605985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:114616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.537 [2024-11-20 07:24:04.605999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.537 [2024-11-20 07:24:04.606040] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.537 [2024-11-20 07:24:04.606061] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.537 [2024-11-20 07:24:04.606073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114624 len:8 PRP1 0x0 PRP2 0x0 00:22:12.537 [2024-11-20 07:24:04.606086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.537 [2024-11-20 07:24:04.606152] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:22:12.537 [2024-11-20 07:24:04.606204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.537 [2024-11-20 07:24:04.606223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.537 [2024-11-20 07:24:04.606239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.537 [2024-11-20 07:24:04.606253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.537 [2024-11-20 07:24:04.606266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.537 [2024-11-20 07:24:04.606286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.537 [2024-11-20 07:24:04.606310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.537 [2024-11-20 07:24:04.606326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.537 [2024-11-20 07:24:04.606340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:12.537 [2024-11-20 07:24:04.609678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:12.537 [2024-11-20 07:24:04.609719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2185560 (9): Bad file descriptor 00:22:12.537 [2024-11-20 07:24:04.636995] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
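The burst above is the failover path being exercised: once the path at 10.0.0.2:4421 drops, every queued READ/WRITE on qid:1 is completed manually with ABORTED - SQ DELETION (00/08), bdev_nvme starts failover to 10.0.0.2:4422, the controller is disconnected and reset, and the bdevperf samples that follow show I/O resuming. As a minimal sketch only (not part of the test suite), the Python snippet below summarizes a saved copy of console output in this form by counting the aborted commands and listing failover and reset events; the log file name and the summary format are assumptions.

```python
#!/usr/bin/env python3
"""Summarize NVMe abort bursts and failover events from a saved SPDK console log.

Illustrative helper only; it assumes each nvme_qpair.c / bdev_nvme.c notice sits
on its own line in the saved log, matching the messages printed above.
"""
import re
from collections import Counter

# Queued command printed while the qpair aborts outstanding I/O.
ABORT_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:(\d+) nsid:\d+ lba:(\d+)"
)
# "Start failover from <src> to <dst>" notice emitted by bdev_nvme.
FAILOVER_RE = re.compile(
    r"bdev_nvme_failover_trid: \*NOTICE\*: \[(?P<nqn>[^,]+), \d+\] Start failover from (?P<src>\S+) to (?P<dst>\S+)"
)
# Controller reset completion notice.
RESET_OK_RE = re.compile(
    r"bdev_nvme_reset_ctrlr_complete: \*NOTICE\*: .* Resetting controller successful"
)

def summarize(path: str) -> None:
    aborted = Counter()          # opcode -> number of queued commands printed during aborts
    failovers = []               # (source trid, destination trid) pairs
    resets = 0
    with open(path, encoding="utf-8", errors="replace") as log:
        for line in log:
            m = ABORT_RE.search(line)
            if m:
                aborted[m.group(1)] += 1
            m = FAILOVER_RE.search(line)
            if m:
                failovers.append((m.group("src"), m.group("dst")))
            if RESET_OK_RE.search(line):
                resets += 1
    print(f"aborted commands: {dict(aborted)}")
    for src, dst in failovers:
        print(f"failover: {src} -> {dst}")
    print(f"successful controller resets: {resets}")

if __name__ == "__main__":
    summarize("nvmf_failover.log")  # hypothetical path to a saved copy of this console log
```

Run against a capture of this output, such a script would report the 10.0.0.2:4421 -> 10.0.0.2:4422 failover seen here plus the later path switches, alongside the READ/WRITE abort counts for each burst; it is a reading aid for the log, not an SPDK tool.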
00:22:12.537 8123.20 IOPS, 31.73 MiB/s [2024-11-20T06:24:15.970Z] 8209.17 IOPS, 32.07 MiB/s [2024-11-20T06:24:15.970Z] 8264.29 IOPS, 32.28 MiB/s [2024-11-20T06:24:15.970Z] 8305.00 IOPS, 32.44 MiB/s [2024-11-20T06:24:15.970Z] 8350.67 IOPS, 32.62 MiB/s [2024-11-20T06:24:15.970Z] [2024-11-20 07:24:09.215289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.537 [2024-11-20 07:24:09.215349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.537 [2024-11-20 07:24:09.215368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.537 [2024-11-20 07:24:09.215383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.537 [2024-11-20 07:24:09.215398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.537 [2024-11-20 07:24:09.215412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.537 [2024-11-20 07:24:09.215426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.537 [2024-11-20 07:24:09.215440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.537 [2024-11-20 07:24:09.215453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2185560 is same with the state(6) to be set 00:22:12.537 [2024-11-20 07:24:09.215545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:45184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.537 [2024-11-20 07:24:09.215568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.537 [2024-11-20 07:24:09.215602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:45192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.537 [2024-11-20 07:24:09.215618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.537 [2024-11-20 07:24:09.215649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:45200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.537 [2024-11-20 07:24:09.215663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.537 [2024-11-20 07:24:09.215678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:45208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.537 [2024-11-20 07:24:09.215692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.537 [2024-11-20 07:24:09.215707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:45216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.537 [2024-11-20 07:24:09.215720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:12.537 [2024-11-20 07:24:09.215734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:45224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.537 [2024-11-20 07:24:09.215748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.537 [2024-11-20 07:24:09.215762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:45232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.537 [2024-11-20 07:24:09.215775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.537 [2024-11-20 07:24:09.215790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:45240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.537 [2024-11-20 07:24:09.215803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.537 [2024-11-20 07:24:09.215817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:45248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.537 [2024-11-20 07:24:09.215830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.537 [2024-11-20 07:24:09.215844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:45256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.537 [2024-11-20 07:24:09.215857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.537 [2024-11-20 07:24:09.215871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:45264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.537 [2024-11-20 07:24:09.215884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.537 [2024-11-20 07:24:09.215898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:45272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.537 [2024-11-20 07:24:09.215911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.537 [2024-11-20 07:24:09.215927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:45280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.537 [2024-11-20 07:24:09.215940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.537 [2024-11-20 07:24:09.215959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:45288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.537 [2024-11-20 07:24:09.215973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.537 [2024-11-20 07:24:09.215988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:45296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.538 [2024-11-20 07:24:09.216001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.538 
[2024-11-20 07:24:09.216016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:45304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.538 [2024-11-20 07:24:09.216029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.538 [2024-11-20 07:24:09.216043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:45312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.538 [2024-11-20 07:24:09.216056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.538 [2024-11-20 07:24:09.216070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:45320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.538 [2024-11-20 07:24:09.216083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.538 [2024-11-20 07:24:09.216098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:45328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.538 [2024-11-20 07:24:09.216111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.538 [2024-11-20 07:24:09.216125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:45336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.538 [2024-11-20 07:24:09.216139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.538 [2024-11-20 07:24:09.216153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:45344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.538 [2024-11-20 07:24:09.216166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.538 [2024-11-20 07:24:09.216181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:45352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.538 [2024-11-20 07:24:09.216194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.538 [2024-11-20 07:24:09.216208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:45360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.538 [2024-11-20 07:24:09.216221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.538 [2024-11-20 07:24:09.216236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:45368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.538 [2024-11-20 07:24:09.216249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.538 [2024-11-20 07:24:09.216264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:45376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.538 [2024-11-20 07:24:09.216277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.538 [2024-11-20 07:24:09.216292] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:45384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.538 [2024-11-20 07:24:09.216344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.538 [2024-11-20 07:24:09.216363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:45392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.538 [2024-11-20 07:24:09.216378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.538 [2024-11-20 07:24:09.216394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:45400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.538 [2024-11-20 07:24:09.216408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.538 [2024-11-20 07:24:09.216425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:45408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.538 [2024-11-20 07:24:09.216440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.538 [2024-11-20 07:24:09.216456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:45416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.538 [2024-11-20 07:24:09.216470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.538 [2024-11-20 07:24:09.216484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:45424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.538 [2024-11-20 07:24:09.216498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.538 [2024-11-20 07:24:09.216513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:45432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.538 [2024-11-20 07:24:09.216526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.538 [2024-11-20 07:24:09.216541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:45440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.538 [2024-11-20 07:24:09.216554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.538 [2024-11-20 07:24:09.216569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:45448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.538 [2024-11-20 07:24:09.216583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.538 [2024-11-20 07:24:09.216609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:45456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.538 [2024-11-20 07:24:09.216638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.538 [2024-11-20 07:24:09.216652] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:45464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.538 [2024-11-20 07:24:09.216665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.538 [2024-11-20 07:24:09.216680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:45472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.538 [2024-11-20 07:24:09.216693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.538 [2024-11-20 07:24:09.216707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:45480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.538 [2024-11-20 07:24:09.216721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.538 [2024-11-20 07:24:09.216739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:45488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.538 [2024-11-20 07:24:09.216753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.538 [2024-11-20 07:24:09.216768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:45496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.538 [2024-11-20 07:24:09.216781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.538 [2024-11-20 07:24:09.216796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:45504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.538 [2024-11-20 07:24:09.216809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.538 [2024-11-20 07:24:09.216823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:45512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.538 [2024-11-20 07:24:09.216837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.538 [2024-11-20 07:24:09.216851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:45520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.538 [2024-11-20 07:24:09.216865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.538 [2024-11-20 07:24:09.216879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:45528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.538 [2024-11-20 07:24:09.216892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.538 [2024-11-20 07:24:09.216907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:45536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.538 [2024-11-20 07:24:09.216921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.538 [2024-11-20 07:24:09.216935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 
lba:45544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.538 [2024-11-20 07:24:09.216948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.538 [2024-11-20 07:24:09.216962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:45552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.538 [2024-11-20 07:24:09.216975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.538 [2024-11-20 07:24:09.216990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:45560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.538 [2024-11-20 07:24:09.217004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.538 [2024-11-20 07:24:09.217018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:45568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.538 [2024-11-20 07:24:09.217031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.538 [2024-11-20 07:24:09.217045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:45576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.538 [2024-11-20 07:24:09.217058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.538 [2024-11-20 07:24:09.217073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:45584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.538 [2024-11-20 07:24:09.217090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.538 [2024-11-20 07:24:09.217105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:45592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.538 [2024-11-20 07:24:09.217118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.538 [2024-11-20 07:24:09.217133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:45600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.538 [2024-11-20 07:24:09.217146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.538 [2024-11-20 07:24:09.217160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:45608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.538 [2024-11-20 07:24:09.217173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.539 [2024-11-20 07:24:09.217188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:45616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.539 [2024-11-20 07:24:09.217201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.539 [2024-11-20 07:24:09.217216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:45624 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:12.539 [2024-11-20 07:24:09.217229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.539 [2024-11-20 07:24:09.217243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:45632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.539 [2024-11-20 07:24:09.217256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.539 [2024-11-20 07:24:09.217271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:45640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.539 [2024-11-20 07:24:09.217284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.539 [2024-11-20 07:24:09.217298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:45648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.539 [2024-11-20 07:24:09.217345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.539 [2024-11-20 07:24:09.217361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:45656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.539 [2024-11-20 07:24:09.217375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.539 [2024-11-20 07:24:09.217391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:45664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.539 [2024-11-20 07:24:09.217405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.539 [2024-11-20 07:24:09.217420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:45672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.539 [2024-11-20 07:24:09.217434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.539 [2024-11-20 07:24:09.217449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:45680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.539 [2024-11-20 07:24:09.217463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.539 [2024-11-20 07:24:09.217481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:45688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.539 [2024-11-20 07:24:09.217496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.539 [2024-11-20 07:24:09.217511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:45696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.539 [2024-11-20 07:24:09.217524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.539 [2024-11-20 07:24:09.217539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:45704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.539 [2024-11-20 
07:24:09.217553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.539 [2024-11-20 07:24:09.217568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:45712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.539 [2024-11-20 07:24:09.217581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.539 [2024-11-20 07:24:09.217596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:45720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.539 [2024-11-20 07:24:09.217612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.539 [2024-11-20 07:24:09.217643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:45728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.539 [2024-11-20 07:24:09.217656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.539 [2024-11-20 07:24:09.217670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:45736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.539 [2024-11-20 07:24:09.217683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.539 [2024-11-20 07:24:09.217698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:45744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.539 [2024-11-20 07:24:09.217711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.539 [2024-11-20 07:24:09.217725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:45752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.539 [2024-11-20 07:24:09.217738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.539 [2024-11-20 07:24:09.217753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:45760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.539 [2024-11-20 07:24:09.217766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.539 [2024-11-20 07:24:09.217780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:45768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.539 [2024-11-20 07:24:09.217794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.539 [2024-11-20 07:24:09.217808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:45776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.539 [2024-11-20 07:24:09.217821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.539 [2024-11-20 07:24:09.217835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:45784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.539 [2024-11-20 07:24:09.217848] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.539 [2024-11-20 07:24:09.217873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:45792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.539 [2024-11-20 07:24:09.217887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.539 [2024-11-20 07:24:09.217902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:45800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.539 [2024-11-20 07:24:09.217915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.539 [2024-11-20 07:24:09.217929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.539 [2024-11-20 07:24:09.217943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.539 [2024-11-20 07:24:09.217958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:45816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.539 [2024-11-20 07:24:09.217971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.539 [2024-11-20 07:24:09.217986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:45824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.539 [2024-11-20 07:24:09.217999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.539 [2024-11-20 07:24:09.218014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:45832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.539 [2024-11-20 07:24:09.218027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.539 [2024-11-20 07:24:09.218042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:45840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.539 [2024-11-20 07:24:09.218055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.539 [2024-11-20 07:24:09.218069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:45848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.539 [2024-11-20 07:24:09.218082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.539 [2024-11-20 07:24:09.218097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:45856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.539 [2024-11-20 07:24:09.218110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.539 [2024-11-20 07:24:09.218125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:45864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.539 [2024-11-20 07:24:09.218138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.539 [2024-11-20 07:24:09.218153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:45872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.539 [2024-11-20 07:24:09.218166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.539 [2024-11-20 07:24:09.218180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:45880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.539 [2024-11-20 07:24:09.218194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.539 [2024-11-20 07:24:09.218208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.539 [2024-11-20 07:24:09.218225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.539 [2024-11-20 07:24:09.218240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:45896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.539 [2024-11-20 07:24:09.218254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.540 [2024-11-20 07:24:09.218268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.540 [2024-11-20 07:24:09.218281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.540 [2024-11-20 07:24:09.218319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:45912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.540 [2024-11-20 07:24:09.218344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.540 [2024-11-20 07:24:09.218381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:45920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.540 [2024-11-20 07:24:09.218396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.540 [2024-11-20 07:24:09.218411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:45928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.540 [2024-11-20 07:24:09.218425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.540 [2024-11-20 07:24:09.218440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:45936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.540 [2024-11-20 07:24:09.218454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.540 [2024-11-20 07:24:09.218470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:45944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.540 [2024-11-20 07:24:09.218484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:22:12.540 [2024-11-20 07:24:09.218499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:45952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.540 [2024-11-20 07:24:09.218513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.540 [2024-11-20 07:24:09.218529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:45960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.540 [2024-11-20 07:24:09.218543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.540 [2024-11-20 07:24:09.218558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:45968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.540 [2024-11-20 07:24:09.218572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.540 [2024-11-20 07:24:09.218588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:45976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.540 [2024-11-20 07:24:09.218602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.540 [2024-11-20 07:24:09.218633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:45984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.540 [2024-11-20 07:24:09.218647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.540 [2024-11-20 07:24:09.218666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:45992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.540 [2024-11-20 07:24:09.218696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.540 [2024-11-20 07:24:09.218711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:46000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.540 [2024-11-20 07:24:09.218725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.540 [2024-11-20 07:24:09.218739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:46008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.540 [2024-11-20 07:24:09.218753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.540 [2024-11-20 07:24:09.218767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:46016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.540 [2024-11-20 07:24:09.218781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.540 [2024-11-20 07:24:09.218795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:46024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.540 [2024-11-20 07:24:09.218809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.540 [2024-11-20 07:24:09.218823] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:46032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.540 [2024-11-20 07:24:09.218836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.540 [2024-11-20 07:24:09.218851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:46040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.540 [2024-11-20 07:24:09.218879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.540 [2024-11-20 07:24:09.218900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:46048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.540 [2024-11-20 07:24:09.218914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.540 [2024-11-20 07:24:09.218929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:46056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.540 [2024-11-20 07:24:09.218943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.540 [2024-11-20 07:24:09.218958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.540 [2024-11-20 07:24:09.218972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.540 [2024-11-20 07:24:09.218986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.540 [2024-11-20 07:24:09.219000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.540 [2024-11-20 07:24:09.219015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:46080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.540 [2024-11-20 07:24:09.219029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.540 [2024-11-20 07:24:09.219044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:46088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.540 [2024-11-20 07:24:09.219061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.540 [2024-11-20 07:24:09.219077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:46096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.540 [2024-11-20 07:24:09.219091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.540 [2024-11-20 07:24:09.219106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:46104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.540 [2024-11-20 07:24:09.219120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.540 [2024-11-20 07:24:09.219135] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:46112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.540 [2024-11-20 07:24:09.219148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.540 [2024-11-20 07:24:09.219163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:46120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.540 [2024-11-20 07:24:09.219176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.540 [2024-11-20 07:24:09.219192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:46128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.540 [2024-11-20 07:24:09.219206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.540 [2024-11-20 07:24:09.219221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:46136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.540 [2024-11-20 07:24:09.219234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.540 [2024-11-20 07:24:09.219249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:46144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.540 [2024-11-20 07:24:09.219262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.540 [2024-11-20 07:24:09.219277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:46152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.540 [2024-11-20 07:24:09.219291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.540 [2024-11-20 07:24:09.219329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:46160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.540 [2024-11-20 07:24:09.219346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.540 [2024-11-20 07:24:09.219362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:46168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.540 [2024-11-20 07:24:09.219377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.540 [2024-11-20 07:24:09.219398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:46176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.540 [2024-11-20 07:24:09.219413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.540 [2024-11-20 07:24:09.219429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:46184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.540 [2024-11-20 07:24:09.219443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.540 [2024-11-20 07:24:09.219459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:46192 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.540 [2024-11-20 07:24:09.219477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.540 [2024-11-20 07:24:09.219511] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.540 [2024-11-20 07:24:09.219528] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.540 [2024-11-20 07:24:09.219541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46200 len:8 PRP1 0x0 PRP2 0x0 00:22:12.540 [2024-11-20 07:24:09.219554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.540 [2024-11-20 07:24:09.219637] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:12.540 [2024-11-20 07:24:09.219658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:22:12.540 [2024-11-20 07:24:09.222994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:12.541 [2024-11-20 07:24:09.223035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2185560 (9): Bad file descriptor 00:22:12.541 [2024-11-20 07:24:09.378037] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:22:12.541 8218.80 IOPS, 32.10 MiB/s [2024-11-20T06:24:15.974Z] 8234.45 IOPS, 32.17 MiB/s [2024-11-20T06:24:15.974Z] 8240.17 IOPS, 32.19 MiB/s [2024-11-20T06:24:15.974Z] 8239.62 IOPS, 32.19 MiB/s [2024-11-20T06:24:15.974Z] 8243.07 IOPS, 32.20 MiB/s [2024-11-20T06:24:15.974Z] 8250.73 IOPS, 32.23 MiB/s 00:22:12.541 Latency(us) 00:22:12.541 [2024-11-20T06:24:15.974Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:12.541 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:12.541 Verification LBA range: start 0x0 length 0x4000 00:22:12.541 NVMe0n1 : 15.05 8230.39 32.15 933.06 0.00 13906.98 546.13 45632.47 00:22:12.541 [2024-11-20T06:24:15.974Z] =================================================================================================================== 00:22:12.541 [2024-11-20T06:24:15.974Z] Total : 8230.39 32.15 933.06 0.00 13906.98 546.13 45632.47 00:22:12.541 Received shutdown signal, test time was about 15.000000 seconds 00:22:12.541 00:22:12.541 Latency(us) 00:22:12.541 [2024-11-20T06:24:15.974Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:12.541 [2024-11-20T06:24:15.974Z] =================================================================================================================== 00:22:12.541 [2024-11-20T06:24:15.974Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:12.541 07:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:12.541 07:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:22:12.541 07:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:22:12.541 07:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2573607 00:22:12.541 07:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r 
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:12.541 07:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2573607 /var/tmp/bdevperf.sock 00:22:12.541 07:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 2573607 ']' 00:22:12.541 07:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:12.541 07:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:12.541 07:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:12.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:12.541 07:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:12.541 07:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:12.541 07:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:12.541 07:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:22:12.541 07:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:12.541 [2024-11-20 07:24:15.519646] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:12.541 07:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:12.541 [2024-11-20 07:24:15.808434] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:12.541 07:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:13.129 NVMe0n1 00:22:13.129 07:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:13.386 00:22:13.643 07:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:13.900 00:22:13.900 07:24:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:13.900 07:24:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:22:14.158 07:24:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:14.415 07:24:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 
3 00:22:17.689 07:24:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:17.689 07:24:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:22:17.689 07:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2574276 00:22:17.689 07:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:17.689 07:24:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2574276 00:22:19.061 { 00:22:19.061 "results": [ 00:22:19.061 { 00:22:19.061 "job": "NVMe0n1", 00:22:19.061 "core_mask": "0x1", 00:22:19.061 "workload": "verify", 00:22:19.061 "status": "finished", 00:22:19.061 "verify_range": { 00:22:19.061 "start": 0, 00:22:19.061 "length": 16384 00:22:19.061 }, 00:22:19.061 "queue_depth": 128, 00:22:19.061 "io_size": 4096, 00:22:19.061 "runtime": 1.009806, 00:22:19.061 "iops": 8542.234845108862, 00:22:19.061 "mibps": 33.36810486370649, 00:22:19.061 "io_failed": 0, 00:22:19.061 "io_timeout": 0, 00:22:19.061 "avg_latency_us": 14898.105661608744, 00:22:19.061 "min_latency_us": 2402.9866666666667, 00:22:19.061 "max_latency_us": 15825.730370370371 00:22:19.061 } 00:22:19.061 ], 00:22:19.061 "core_count": 1 00:22:19.061 } 00:22:19.061 07:24:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:19.061 [2024-11-20 07:24:15.045995] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:22:19.061 [2024-11-20 07:24:15.046106] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2573607 ] 00:22:19.061 [2024-11-20 07:24:15.114829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.061 [2024-11-20 07:24:15.171701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.061 [2024-11-20 07:24:17.701787] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:19.061 [2024-11-20 07:24:17.701881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.061 [2024-11-20 07:24:17.701906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.061 [2024-11-20 07:24:17.701922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.061 [2024-11-20 07:24:17.701936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.061 [2024-11-20 07:24:17.701964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.061 [2024-11-20 07:24:17.701978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.061 [2024-11-20 07:24:17.701994] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.061 [2024-11-20 07:24:17.702007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.061 [2024-11-20 07:24:17.702021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:22:19.061 [2024-11-20 07:24:17.702065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:22:19.061 [2024-11-20 07:24:17.702096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb89560 (9): Bad file descriptor 00:22:19.061 [2024-11-20 07:24:17.713061] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:22:19.061 Running I/O for 1 seconds... 00:22:19.061 8490.00 IOPS, 33.16 MiB/s 00:22:19.061 Latency(us) 00:22:19.061 [2024-11-20T06:24:22.494Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:19.061 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:19.061 Verification LBA range: start 0x0 length 0x4000 00:22:19.061 NVMe0n1 : 1.01 8542.23 33.37 0.00 0.00 14898.11 2402.99 15825.73 00:22:19.061 [2024-11-20T06:24:22.494Z] =================================================================================================================== 00:22:19.061 [2024-11-20T06:24:22.494Z] Total : 8542.23 33.37 0.00 0.00 14898.11 2402.99 15825.73 00:22:19.061 07:24:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:19.061 07:24:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:22:19.061 07:24:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:19.626 07:24:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:19.626 07:24:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:22:19.626 07:24:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:19.883 07:24:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:22:23.160 07:24:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:23.161 07:24:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:22:23.418 07:24:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2573607 00:22:23.418 07:24:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 2573607 ']' 00:22:23.418 07:24:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 2573607 00:22:23.418 07:24:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 
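For anyone replaying the failover host test by hand outside the CI harness, the pass check and the second I/O pass traced above reduce to the commands below. This is a condensed sketch of lines already present in this trace, not an extra step the harness runs; the rpc.py/bdevperf.py paths, the 10.0.0.2 target address and the /var/tmp/bdevperf.sock socket are this run's own, and the failure handling shown is illustrative.
  # first-pass criterion: exactly 3 'Resetting controller successful' events recorded in try.txt
  count=$(grep -c 'Resetting controller successful' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt)
  (( count != 3 )) && exit 1                       # the run above recorded count=3
  # second pass: add backup listeners, attach each path with an explicit failover policy,
  # drop the primary path, then drive I/O again over the bdevperf RPC socket
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  sleep 3
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
The killprocess trace for the bdevperf application (pid 2573607) continues below.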
00:22:23.418 07:24:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:23.418 07:24:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2573607 00:22:23.418 07:24:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:23.418 07:24:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:23.418 07:24:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2573607' 00:22:23.418 killing process with pid 2573607 00:22:23.418 07:24:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 2573607 00:22:23.418 07:24:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 2573607 00:22:23.675 07:24:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:22:23.675 07:24:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:23.933 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:23.933 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:23.933 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:22:23.933 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:23.933 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:22:23.933 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:23.933 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:22:23.933 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:23.933 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:23.933 rmmod nvme_tcp 00:22:23.933 rmmod nvme_fabrics 00:22:23.933 rmmod nvme_keyring 00:22:23.933 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:23.933 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:22:23.933 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:22:23.933 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2570719 ']' 00:22:23.933 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2570719 00:22:23.933 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 2570719 ']' 00:22:23.933 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 2570719 00:22:23.933 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:22:23.933 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:23.933 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2570719 00:22:23.933 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:23.933 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:23.933 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 2570719' 00:22:23.933 killing process with pid 2570719 00:22:23.933 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 2570719 00:22:23.933 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 2570719 00:22:24.189 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:24.189 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:24.189 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:24.189 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:22:24.189 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:22:24.189 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:24.189 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:22:24.189 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:24.189 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:24.189 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.189 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:24.189 07:24:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:26.751 00:22:26.751 real 0m36.034s 00:22:26.751 user 2m7.489s 00:22:26.751 sys 0m5.993s 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:26.751 ************************************ 00:22:26.751 END TEST nvmf_failover 00:22:26.751 ************************************ 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.751 ************************************ 00:22:26.751 START TEST nvmf_host_discovery 00:22:26.751 ************************************ 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:26.751 * Looking for test storage... 
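The teardown that closes out nvmf_failover, traced immediately above before the nvmf_host_discovery banner, amounts to the following. Again a condensed sketch of commands visible in the trace rather than anything new; pid 2570719 (the nvmf target started earlier in the run), the cvl_* interface names and the cvl_0_0_ns_spdk namespace are specific to this run.
  sync
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
  modprobe -v -r nvme-tcp                                # the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring being unloaded
  kill 2570719 && wait 2570719                           # stop the nvmf target application
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep all rules except the ones the harness tagged with SPDK_NVMF
  ip -4 addr flush cvl_0_1                               # and _remove_spdk_ns tears down the cvl_0_0_ns_spdk namespace
The host-discovery test output resumes below.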
00:22:26.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:26.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.751 --rc genhtml_branch_coverage=1 00:22:26.751 --rc genhtml_function_coverage=1 00:22:26.751 --rc genhtml_legend=1 00:22:26.751 --rc geninfo_all_blocks=1 00:22:26.751 --rc geninfo_unexecuted_blocks=1 00:22:26.751 00:22:26.751 ' 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:26.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.751 --rc genhtml_branch_coverage=1 00:22:26.751 --rc genhtml_function_coverage=1 00:22:26.751 --rc genhtml_legend=1 00:22:26.751 --rc geninfo_all_blocks=1 00:22:26.751 --rc geninfo_unexecuted_blocks=1 00:22:26.751 00:22:26.751 ' 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:26.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.751 --rc genhtml_branch_coverage=1 00:22:26.751 --rc genhtml_function_coverage=1 00:22:26.751 --rc genhtml_legend=1 00:22:26.751 --rc geninfo_all_blocks=1 00:22:26.751 --rc geninfo_unexecuted_blocks=1 00:22:26.751 00:22:26.751 ' 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:26.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.751 --rc genhtml_branch_coverage=1 00:22:26.751 --rc genhtml_function_coverage=1 00:22:26.751 --rc genhtml_legend=1 00:22:26.751 --rc geninfo_all_blocks=1 00:22:26.751 --rc geninfo_unexecuted_blocks=1 00:22:26.751 00:22:26.751 ' 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:22:26.751 07:24:29 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:26.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:22:26.751 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:26.752 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.752 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:26.752 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:26.752 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:26.752 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.752 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:26.752 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.752 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:26.752 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:26.752 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:22:26.752 07:24:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:28.652 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:28.652 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:28.652 07:24:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:28.652 Found net devices under 0000:09:00.0: cvl_0_0 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:28.652 Found net devices under 0000:09:00.1: cvl_0_1 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:28.652 
07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:28.652 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:28.653 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:28.653 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:28.653 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:28.653 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:28.653 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:28.653 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:28.653 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:28.653 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:28.653 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:28.653 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:28.653 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:28.653 07:24:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:28.653 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:28.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:28.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:22:28.653 00:22:28.653 --- 10.0.0.2 ping statistics --- 00:22:28.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.653 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:22:28.653 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:28.653 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:28.653 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:22:28.653 00:22:28.653 --- 10.0.0.1 ping statistics --- 00:22:28.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.653 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:22:28.653 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:28.653 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:22:28.653 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:28.653 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:28.653 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:28.653 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:28.653 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:28.653 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:28.653 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:28.653 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:28.653 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:28.653 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:28.653 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:28.653 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2577007 00:22:28.653 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:28.653 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2577007 00:22:28.653 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 2577007 ']' 00:22:28.653 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.653 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:28.653 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.653 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:28.653 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:28.910 [2024-11-20 07:24:32.095585] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
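The setup recorded above (nvmf_tcp_init followed by nvmfappstart) splits the two E810 ports into an initiator side (cvl_0_1, 10.0.0.1) and a target side (cvl_0_0, 10.0.0.2, isolated in the cvl_0_0_ns_spdk network namespace), opens the firewall for NVMe/TCP, checks reachability with ping in both directions, loads nvme-tcp, and then launches nvmf_tgt inside the namespace. A minimal stand-alone sketch of the same plumbing, using only commands that appear in the log (the nvmf_tgt path is this job's build tree and would differ on another machine):

    ip netns add cvl_0_0_ns_spdk                                   # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator keeps cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP reach the initiator side
    ping -c 1 10.0.0.2                                             # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator
    modprobe nvme-tcp                                              # kernel NVMe/TCP module, loaded by the harness
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &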
00:22:28.910 [2024-11-20 07:24:32.095694] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:28.910 [2024-11-20 07:24:32.165360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.910 [2024-11-20 07:24:32.218365] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:28.910 [2024-11-20 07:24:32.218417] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:28.910 [2024-11-20 07:24:32.218431] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:28.910 [2024-11-20 07:24:32.218442] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:28.910 [2024-11-20 07:24:32.218452] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:28.910 [2024-11-20 07:24:32.219032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:28.910 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:28.910 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:22:28.910 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:28.910 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:28.910 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.168 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:29.168 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:29.168 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.168 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.168 [2024-11-20 07:24:32.361118] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:29.168 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.168 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:29.168 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.168 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.168 [2024-11-20 07:24:32.369312] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:29.168 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.168 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:29.168 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.168 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.168 null0 00:22:29.168 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.168 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:29.168 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.168 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.168 null1 00:22:29.168 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.168 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:29.168 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.168 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.168 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.168 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2577036 00:22:29.168 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:29.168 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2577036 /tmp/host.sock 00:22:29.168 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 2577036 ']' 00:22:29.168 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:22:29.168 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:29.168 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:29.168 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:29.168 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:29.168 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.168 [2024-11-20 07:24:32.442454] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
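With networking in place, the log shows the target being configured over the default RPC socket and a second SPDK instance being started to play the host role: nvmf_create_transport sets up the TCP transport, the well-known discovery NQN gets a listener on 10.0.0.2:8009, two null bdevs are created to back the namespaces used later, and nvmf_tgt is launched again on a single core with its RPC socket at /tmp/host.sock so the test can drive its bdev_nvme layer independently. Condensed from the commands above (rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py; sizes and paths are the ones this run used):

    # Target side (default RPC socket /var/tmp/spdk.sock)
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    rpc_cmd bdev_null_create null0 1000 512        # null bdevs that will back the namespaces
    rpc_cmd bdev_null_create null1 1000 512
    rpc_cmd bdev_wait_for_examine

    # Host side: second nvmf_tgt, core mask 0x1, RPCs on /tmp/host.sock
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &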
00:22:29.168 [2024-11-20 07:24:32.442519] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2577036 ] 00:22:29.168 [2024-11-20 07:24:32.506219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.168 [2024-11-20 07:24:32.563116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.426 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:29.426 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:22:29.426 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:29.426 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:29.426 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.426 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.426 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.426 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:29.426 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.426 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.426 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.426 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:22:29.426 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:22:29.426 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:29.426 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:29.426 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.426 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.426 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:29.426 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:29.426 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.426 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:29.426 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:22:29.427 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:29.427 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:29.427 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.427 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:22:29.427 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:29.427 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:29.427 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.427 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:22:29.427 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:29.427 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.427 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.427 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.427 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:22:29.427 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:29.427 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:29.427 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.427 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.427 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:29.427 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:29.427 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.427 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:29.427 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:22:29.427 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:29.427 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.427 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:29.427 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.427 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:29.427 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:29.427 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.427 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:22:29.427 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:29.427 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.427 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.685 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.685 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:22:29.685 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:29.685 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:29.685 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.685 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:29.685 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.685 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:29.685 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.686 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:22:29.686 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:22:29.686 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:29.686 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.686 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:29.686 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.686 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:29.686 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:29.686 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.686 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:29.686 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:29.686 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.686 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.686 [2024-11-20 07:24:32.938869] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:29.686 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.686 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:22:29.686 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:29.686 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.686 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:29.686 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.686 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:29.686 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:29.686 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.686 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:22:29.686 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:22:29.686 07:24:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:29.686 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:29.686 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.686 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.686 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:29.686 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:29.686 07:24:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.686 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:22:29.686 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:22:29.686 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:29.686 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:29.686 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:29.686 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:22:29.686 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:22:29.686 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:29.686 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:22:29.686 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:29.686 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.686 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:29.686 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.686 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.686 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:29.686 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:22:29.686 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:22:29.686 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:22:29.686 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:29.686 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.686 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.686 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.686 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:29.686 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:29.686 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:22:29.686 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:22:29.686 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:29.686 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:22:29.686 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:29.686 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.686 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:29.686 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.686 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:29.686 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:29.686 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.686 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:22:29.686 07:24:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:22:30.619 [2024-11-20 07:24:33.739414] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:30.619 [2024-11-20 07:24:33.739438] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:30.619 [2024-11-20 07:24:33.739461] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:30.619 [2024-11-20 07:24:33.826763] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:30.619 [2024-11-20 07:24:34.006921] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:22:30.619 [2024-11-20 07:24:34.007976] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x6a5fa0:1 started. 00:22:30.619 [2024-11-20 07:24:34.009756] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:30.619 [2024-11-20 07:24:34.009775] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:30.877 [2024-11-20 07:24:34.058049] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x6a5fa0 was disconnected and freed. delete nvme_qpair. 00:22:30.877 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:22:30.877 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:30.877 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:22:30.877 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:30.877 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.877 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:30.877 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:30.877 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:30.877 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:30.877 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.877 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.877 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:22:30.877 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:30.877 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:30.877 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:22:30.877 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:22:30.877 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:22:30.877 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:22:30.877 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:30.877 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:30.877 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.877 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:30.877 07:24:34 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:30.877 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:22:30.878 07:24:34 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:30.878 [2024-11-20 07:24:34.269801] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x6747a0:1 started. 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:30.878 [2024-11-20 07:24:34.277633] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x6747a0 was disconnected and freed. delete nvme_qpair. 
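This is the core of the discovery test as recorded above: the host instance starts a discovery service against the target's 8009 listener, the target then publishes nqn.2016-06.io.spdk:cnode0 with null0 as a namespace, a 4420 data listener, and the test host NQN on its allow list, the discovery poller attaches the subsystem as controller nvme0, and the script polls bdev_nvme_get_controllers / bdev_get_bdevs until nvme0 and nvme0n1 appear, counting bdev-add events with notify_get_notifications; adding null1 as a second namespace is what makes nvme0n2 show up. The same sequence, condensed (commands copied from the log; the waitforcondition retry loop is left out):

    # Host: follow the target's discovery service and auto-attach reported subsystems
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test

    # Target: publish a subsystem for discovery to report
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

    # Host: what the waitforcondition checks evaluate
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs   # -> nvme0
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs              # -> nvme0n1, later nvme0n1 nvme0n2
    rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 | jq '. | length'               # bdev-add notification count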
00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:22:30.878 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:31.136 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:31.136 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:31.136 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:22:31.136 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:22:31.136 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:31.136 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:22:31.136 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:31.136 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.136 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:31.136 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.136 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.136 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:31.136 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:31.136 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:22:31.136 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:22:31.136 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:31.136 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.136 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.137 [2024-11-20 07:24:34.354904] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:31.137 [2024-11-20 07:24:34.355131] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:31.137 [2024-11-20 07:24:34.355163] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 
nvme0n2" ]]' 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:31.137 [2024-11-20 07:24:34.442422] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:22:31.137 07:24:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:22:31.396 [2024-11-20 07:24:34.744154] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:22:31.396 [2024-11-20 07:24:34.744215] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:31.396 [2024-11-20 07:24:34.744231] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:31.396 [2024-11-20 07:24:34.744239] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:32.329 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:22:32.329 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:32.329 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:22:32.329 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:32.329 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:32.329 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.329 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:32.329 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:32.329 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:32.329 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.329 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:32.329 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:22:32.329 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:22:32.329 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:32.329 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:32.329 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:32.329 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:22:32.329 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:22:32.329 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:32.329 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:22:32.329 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:32.329 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:32.330 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.330 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:32.330 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.330 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:32.330 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:32.330 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:22:32.330 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:22:32.330 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:32.330 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.330 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:32.330 [2024-11-20 07:24:35.579388] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:32.330 [2024-11-20 07:24:35.579440] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:32.330 [2024-11-20 07:24:35.581406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.330 [2024-11-20 07:24:35.581457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.330 [2024-11-20 07:24:35.581474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.330 [2024-11-20 07:24:35.581488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.330 [2024-11-20 07:24:35.581517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.330 [2024-11-20 07:24:35.581530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.330 [2024-11-20 07:24:35.581544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.330 [2024-11-20 07:24:35.581557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.330 [2024-11-20 07:24:35.581571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x676550 is same with the state(6) to be set 00:22:32.330 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.330 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:32.330 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:32.330 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:22:32.330 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:22:32.330 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:32.330 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:22:32.330 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:32.330 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:32.330 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.330 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:32.330 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:32.330 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:32.330 [2024-11-20 07:24:35.591399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x676550 (9): Bad file descriptor 00:22:32.330 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.330 [2024-11-20 07:24:35.601452] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:32.330 [2024-11-20 07:24:35.601475] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:32.330 [2024-11-20 07:24:35.601485] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:32.330 [2024-11-20 07:24:35.601500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:32.330 [2024-11-20 07:24:35.601534] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:32.330 [2024-11-20 07:24:35.601720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:32.330 [2024-11-20 07:24:35.601749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x676550 with addr=10.0.0.2, port=4420 00:22:32.330 [2024-11-20 07:24:35.601766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x676550 is same with the state(6) to be set 00:22:32.330 [2024-11-20 07:24:35.601789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x676550 (9): Bad file descriptor 00:22:32.330 [2024-11-20 07:24:35.601811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:32.330 [2024-11-20 07:24:35.601826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:32.330 [2024-11-20 07:24:35.601842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:32.330 [2024-11-20 07:24:35.601855] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:22:32.330 [2024-11-20 07:24:35.601864] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:32.330 [2024-11-20 07:24:35.601872] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:32.330 [2024-11-20 07:24:35.611566] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:32.330 [2024-11-20 07:24:35.611586] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:32.330 [2024-11-20 07:24:35.611595] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:32.330 [2024-11-20 07:24:35.611617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:32.330 [2024-11-20 07:24:35.611641] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:32.330 [2024-11-20 07:24:35.611810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:32.330 [2024-11-20 07:24:35.611838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x676550 with addr=10.0.0.2, port=4420 00:22:32.330 [2024-11-20 07:24:35.611853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x676550 is same with the state(6) to be set 00:22:32.330 [2024-11-20 07:24:35.611875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x676550 (9): Bad file descriptor 00:22:32.330 [2024-11-20 07:24:35.611895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:32.330 [2024-11-20 07:24:35.611909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:32.330 [2024-11-20 07:24:35.611922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:32.330 [2024-11-20 07:24:35.611934] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:32.330 [2024-11-20 07:24:35.611943] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:32.330 [2024-11-20 07:24:35.611950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:32.330 [2024-11-20 07:24:35.621677] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:32.330 [2024-11-20 07:24:35.621700] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:32.330 [2024-11-20 07:24:35.621715] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:32.330 [2024-11-20 07:24:35.621738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:32.330 [2024-11-20 07:24:35.621764] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:22:32.330 [2024-11-20 07:24:35.621970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:32.330 [2024-11-20 07:24:35.621997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x676550 with addr=10.0.0.2, port=4420 00:22:32.330 [2024-11-20 07:24:35.622013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x676550 is same with the state(6) to be set 00:22:32.330 [2024-11-20 07:24:35.622035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x676550 (9): Bad file descriptor 00:22:32.330 [2024-11-20 07:24:35.622056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:32.330 [2024-11-20 07:24:35.622070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:32.330 [2024-11-20 07:24:35.622084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:32.330 [2024-11-20 07:24:35.622096] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:32.330 [2024-11-20 07:24:35.622105] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:32.330 [2024-11-20 07:24:35.622112] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:32.330 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.330 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:22:32.330 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:32.330 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:32.330 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:22:32.330 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:22:32.330 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:32.330 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:22:32.330 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:32.330 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:32.331 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.331 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:32.331 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:32.331 [2024-11-20 07:24:35.631799] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:32.331 [2024-11-20 07:24:35.631837] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:22:32.331 [2024-11-20 07:24:35.631845] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:32.331 [2024-11-20 07:24:35.631852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:32.331 [2024-11-20 07:24:35.631877] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:32.331 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:32.331 [2024-11-20 07:24:35.632080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:32.331 [2024-11-20 07:24:35.632117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x676550 with addr=10.0.0.2, port=4420 00:22:32.331 [2024-11-20 07:24:35.632137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x676550 is same with the state(6) to be set 00:22:32.331 [2024-11-20 07:24:35.632159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x676550 (9): Bad file descriptor 00:22:32.331 [2024-11-20 07:24:35.632182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:32.331 [2024-11-20 07:24:35.632197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:32.331 [2024-11-20 07:24:35.632211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:32.331 [2024-11-20 07:24:35.632225] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:32.331 [2024-11-20 07:24:35.632234] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:32.331 [2024-11-20 07:24:35.632242] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:32.331 [2024-11-20 07:24:35.641911] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:32.331 [2024-11-20 07:24:35.641933] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:32.331 [2024-11-20 07:24:35.641942] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:32.331 [2024-11-20 07:24:35.641950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:32.331 [2024-11-20 07:24:35.641974] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:22:32.331 [2024-11-20 07:24:35.642109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:32.331 [2024-11-20 07:24:35.642137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x676550 with addr=10.0.0.2, port=4420 00:22:32.331 [2024-11-20 07:24:35.642154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x676550 is same with the state(6) to be set 00:22:32.331 [2024-11-20 07:24:35.642188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x676550 (9): Bad file descriptor 00:22:32.331 [2024-11-20 07:24:35.642212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:32.331 [2024-11-20 07:24:35.642226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:32.331 [2024-11-20 07:24:35.642240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:32.331 [2024-11-20 07:24:35.642253] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:32.331 [2024-11-20 07:24:35.642262] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:32.331 [2024-11-20 07:24:35.642270] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:32.331 [2024-11-20 07:24:35.652009] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:32.331 [2024-11-20 07:24:35.652030] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:32.331 [2024-11-20 07:24:35.652038] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:32.331 [2024-11-20 07:24:35.652045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:32.331 [2024-11-20 07:24:35.652068] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:32.331 [2024-11-20 07:24:35.652281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:32.331 [2024-11-20 07:24:35.652317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x676550 with addr=10.0.0.2, port=4420 00:22:32.331 [2024-11-20 07:24:35.652335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x676550 is same with the state(6) to be set 00:22:32.331 [2024-11-20 07:24:35.652358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x676550 (9): Bad file descriptor 00:22:32.331 [2024-11-20 07:24:35.652379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:32.331 [2024-11-20 07:24:35.652393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:32.331 [2024-11-20 07:24:35.652406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:32.331 [2024-11-20 07:24:35.652434] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:22:32.331 [2024-11-20 07:24:35.652443] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:32.331 [2024-11-20 07:24:35.652450] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:32.331 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.331 [2024-11-20 07:24:35.662101] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:32.331 [2024-11-20 07:24:35.662121] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:32.331 [2024-11-20 07:24:35.662129] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:32.331 [2024-11-20 07:24:35.662136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:32.331 [2024-11-20 07:24:35.662160] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:32.331 [2024-11-20 07:24:35.662272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:32.331 [2024-11-20 07:24:35.662321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x676550 with addr=10.0.0.2, port=4420 00:22:32.331 [2024-11-20 07:24:35.662352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x676550 is same with the state(6) to be set 00:22:32.331 [2024-11-20 07:24:35.662375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x676550 (9): Bad file descriptor 00:22:32.331 [2024-11-20 07:24:35.662396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:32.331 [2024-11-20 07:24:35.662410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:32.331 [2024-11-20 07:24:35.662424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:32.331 [2024-11-20 07:24:35.662437] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:32.331 [2024-11-20 07:24:35.662446] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:32.331 [2024-11-20 07:24:35.662453] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:32.331 [2024-11-20 07:24:35.672194] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:32.331 [2024-11-20 07:24:35.672217] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:32.331 [2024-11-20 07:24:35.672227] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:32.331 [2024-11-20 07:24:35.672238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:32.331 [2024-11-20 07:24:35.672264] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:22:32.331 [2024-11-20 07:24:35.672449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:32.331 [2024-11-20 07:24:35.672478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x676550 with addr=10.0.0.2, port=4420 00:22:32.331 [2024-11-20 07:24:35.672495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x676550 is same with the state(6) to be set 00:22:32.331 [2024-11-20 07:24:35.672517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x676550 (9): Bad file descriptor 00:22:32.331 [2024-11-20 07:24:35.672539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:32.331 [2024-11-20 07:24:35.672553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:32.331 [2024-11-20 07:24:35.672570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:32.331 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:32.331 [2024-11-20 07:24:35.672583] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:32.331 [2024-11-20 07:24:35.672592] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:32.331 [2024-11-20 07:24:35.672599] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:32.331 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:22:32.331 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:32.331 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:32.331 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:22:32.331 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:22:32.331 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:32.331 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:22:32.331 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:32.332 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:32.332 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.332 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:32.332 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:32.332 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:32.332 [2024-11-20 07:24:35.682314] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
00:22:32.332 [2024-11-20 07:24:35.682337] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:32.332 [2024-11-20 07:24:35.682362] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:32.332 [2024-11-20 07:24:35.682370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:32.332 [2024-11-20 07:24:35.682397] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:32.332 [2024-11-20 07:24:35.682514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:32.332 [2024-11-20 07:24:35.682548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x676550 with addr=10.0.0.2, port=4420 00:22:32.332 [2024-11-20 07:24:35.682566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x676550 is same with the state(6) to be set 00:22:32.332 [2024-11-20 07:24:35.682603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x676550 (9): Bad file descriptor 00:22:32.332 [2024-11-20 07:24:35.682625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:32.332 [2024-11-20 07:24:35.682639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:32.332 [2024-11-20 07:24:35.682652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:32.332 [2024-11-20 07:24:35.682664] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:32.332 [2024-11-20 07:24:35.682673] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:32.332 [2024-11-20 07:24:35.682680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:32.332 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.332 [2024-11-20 07:24:35.692433] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:32.332 [2024-11-20 07:24:35.692455] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:32.332 [2024-11-20 07:24:35.692465] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:32.332 [2024-11-20 07:24:35.692488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:32.332 [2024-11-20 07:24:35.692514] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:22:32.332 [2024-11-20 07:24:35.692653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:32.332 [2024-11-20 07:24:35.692681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x676550 with addr=10.0.0.2, port=4420 00:22:32.332 [2024-11-20 07:24:35.692697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x676550 is same with the state(6) to be set 00:22:32.332 [2024-11-20 07:24:35.692718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x676550 (9): Bad file descriptor 00:22:32.332 [2024-11-20 07:24:35.692739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:32.332 [2024-11-20 07:24:35.692753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:32.332 [2024-11-20 07:24:35.692766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:32.332 [2024-11-20 07:24:35.692778] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:32.332 [2024-11-20 07:24:35.692787] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:32.332 [2024-11-20 07:24:35.692794] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:32.332 [2024-11-20 07:24:35.702549] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:32.332 [2024-11-20 07:24:35.702570] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:32.332 [2024-11-20 07:24:35.702593] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:32.332 [2024-11-20 07:24:35.702600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:32.332 [2024-11-20 07:24:35.702629] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:32.332 [2024-11-20 07:24:35.702820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:32.332 [2024-11-20 07:24:35.702849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x676550 with addr=10.0.0.2, port=4420 00:22:32.332 [2024-11-20 07:24:35.702865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x676550 is same with the state(6) to be set 00:22:32.332 [2024-11-20 07:24:35.702887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x676550 (9): Bad file descriptor 00:22:32.332 [2024-11-20 07:24:35.702908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:32.332 [2024-11-20 07:24:35.702922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:32.332 [2024-11-20 07:24:35.702935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:32.332 [2024-11-20 07:24:35.702947] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:22:32.332 [2024-11-20 07:24:35.702956] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:32.332 [2024-11-20 07:24:35.702963] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:32.332 [2024-11-20 07:24:35.706232] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:32.332 [2024-11-20 07:24:35.706258] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:32.332 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:22:32.332 07:24:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s 
/tmp/host.sock notify_get_notifications -i 2 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # local max=10 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.706 07:24:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:34.640 [2024-11-20 07:24:38.009465] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:34.640 [2024-11-20 07:24:38.009494] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:34.640 [2024-11-20 07:24:38.009518] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:34.898 [2024-11-20 07:24:38.095790] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:35.157 [2024-11-20 07:24:38.355177] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:22:35.157 [2024-11-20 07:24:38.356155] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x673880:1 started. 00:22:35.157 [2024-11-20 07:24:38.358421] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:35.157 [2024-11-20 07:24:38.358472] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:35.157 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.157 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:35.157 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:22:35.157 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:35.157 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:35.157 [2024-11-20 07:24:38.360277] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x673880 was disconnected and freed. delete nvme_qpair. 
00:22:35.157 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:35.157 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:35.157 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:35.157 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:35.157 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.157 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.157 request: 00:22:35.157 { 00:22:35.157 "name": "nvme", 00:22:35.157 "trtype": "tcp", 00:22:35.157 "traddr": "10.0.0.2", 00:22:35.157 "adrfam": "ipv4", 00:22:35.157 "trsvcid": "8009", 00:22:35.157 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:35.157 "wait_for_attach": true, 00:22:35.157 "method": "bdev_nvme_start_discovery", 00:22:35.157 "req_id": 1 00:22:35.157 } 00:22:35.157 Got JSON-RPC error response 00:22:35.157 response: 00:22:35.157 { 00:22:35.157 "code": -17, 00:22:35.157 "message": "File exists" 00:22:35.157 } 00:22:35.157 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:35.157 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:22:35.157 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:35.157 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:35.157 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:35.157 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:22:35.157 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:35.157 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:35.157 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.157 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:35.157 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.157 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:35.157 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.157 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:22:35.157 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:22:35.157 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:35.157 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # 
sort 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.158 request: 00:22:35.158 { 00:22:35.158 "name": "nvme_second", 00:22:35.158 "trtype": "tcp", 00:22:35.158 "traddr": "10.0.0.2", 00:22:35.158 "adrfam": "ipv4", 00:22:35.158 "trsvcid": "8009", 00:22:35.158 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:35.158 "wait_for_attach": true, 00:22:35.158 "method": "bdev_nvme_start_discovery", 00:22:35.158 "req_id": 1 00:22:35.158 } 00:22:35.158 Got JSON-RPC error response 00:22:35.158 response: 00:22:35.158 { 00:22:35.158 "code": -17, 00:22:35.158 "message": "File exists" 00:22:35.158 } 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.158 07:24:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.531 [2024-11-20 07:24:39.569846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:36.531 [2024-11-20 07:24:39.569893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x673e70 with addr=10.0.0.2, port=8010 00:22:36.531 [2024-11-20 07:24:39.569924] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:36.531 [2024-11-20 07:24:39.569939] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:36.531 [2024-11-20 07:24:39.569951] 
bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:37.464 [2024-11-20 07:24:40.572380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:37.464 [2024-11-20 07:24:40.572430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x673e70 with addr=10.0.0.2, port=8010 00:22:37.464 [2024-11-20 07:24:40.572461] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:37.464 [2024-11-20 07:24:40.572476] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:37.464 [2024-11-20 07:24:40.572489] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:38.399 [2024-11-20 07:24:41.574523] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:38.399 request: 00:22:38.399 { 00:22:38.399 "name": "nvme_second", 00:22:38.399 "trtype": "tcp", 00:22:38.399 "traddr": "10.0.0.2", 00:22:38.399 "adrfam": "ipv4", 00:22:38.399 "trsvcid": "8010", 00:22:38.399 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:38.399 "wait_for_attach": false, 00:22:38.399 "attach_timeout_ms": 3000, 00:22:38.399 "method": "bdev_nvme_start_discovery", 00:22:38.399 "req_id": 1 00:22:38.399 } 00:22:38.399 Got JSON-RPC error response 00:22:38.399 response: 00:22:38.399 { 00:22:38.399 "code": -110, 00:22:38.399 "message": "Connection timed out" 00:22:38.399 } 00:22:38.399 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:38.399 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:22:38.399 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:38.399 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:38.399 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:38.399 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:22:38.399 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:38.399 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:38.399 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.399 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:38.399 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.399 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:38.399 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.399 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:22:38.399 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:22:38.399 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2577036 00:22:38.399 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:22:38.399 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:38.399 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@121 -- # sync 00:22:38.399 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:38.399 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:22:38.399 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:38.399 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:38.399 rmmod nvme_tcp 00:22:38.399 rmmod nvme_fabrics 00:22:38.399 rmmod nvme_keyring 00:22:38.399 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:38.399 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:22:38.399 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:22:38.399 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2577007 ']' 00:22:38.399 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2577007 00:22:38.399 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 2577007 ']' 00:22:38.399 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 2577007 00:22:38.399 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:22:38.399 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:38.399 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2577007 00:22:38.399 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:38.399 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:38.399 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2577007' 00:22:38.399 killing process with pid 2577007 00:22:38.399 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 2577007 00:22:38.399 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 2577007 00:22:38.658 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:38.658 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:38.658 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:38.658 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:22:38.658 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:22:38.658 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:38.658 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:22:38.658 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:38.658 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:38.658 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.658 07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:38.658 
07:24:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.196 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:41.196 00:22:41.196 real 0m14.409s 00:22:41.196 user 0m21.239s 00:22:41.196 sys 0m2.922s 00:22:41.196 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:41.196 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.196 ************************************ 00:22:41.196 END TEST nvmf_host_discovery 00:22:41.196 ************************************ 00:22:41.196 07:24:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:41.196 07:24:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:41.196 07:24:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:41.196 07:24:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.196 ************************************ 00:22:41.196 START TEST nvmf_host_multipath_status 00:22:41.196 ************************************ 00:22:41.196 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:41.196 * Looking for test storage... 00:22:41.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:41.196 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:41.196 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:22:41.196 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:41.196 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:41.196 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:41.196 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:41.196 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:41.196 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:22:41.196 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:22:41.196 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:22:41.196 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:22:41.196 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:22:41.196 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:22:41.196 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:22:41.196 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:41.196 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:22:41.196 07:24:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:41.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.197 --rc genhtml_branch_coverage=1 00:22:41.197 --rc genhtml_function_coverage=1 00:22:41.197 --rc genhtml_legend=1 00:22:41.197 --rc geninfo_all_blocks=1 00:22:41.197 --rc geninfo_unexecuted_blocks=1 00:22:41.197 00:22:41.197 ' 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:41.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.197 --rc genhtml_branch_coverage=1 00:22:41.197 --rc genhtml_function_coverage=1 00:22:41.197 --rc genhtml_legend=1 00:22:41.197 --rc geninfo_all_blocks=1 00:22:41.197 --rc geninfo_unexecuted_blocks=1 00:22:41.197 00:22:41.197 ' 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:41.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.197 --rc genhtml_branch_coverage=1 00:22:41.197 --rc genhtml_function_coverage=1 00:22:41.197 --rc genhtml_legend=1 00:22:41.197 --rc geninfo_all_blocks=1 00:22:41.197 --rc geninfo_unexecuted_blocks=1 00:22:41.197 00:22:41.197 ' 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:41.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.197 --rc genhtml_branch_coverage=1 00:22:41.197 --rc genhtml_function_coverage=1 00:22:41.197 --rc 
genhtml_legend=1 00:22:41.197 --rc geninfo_all_blocks=1 00:22:41.197 --rc geninfo_unexecuted_blocks=1 00:22:41.197 00:22:41.197 ' 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' 
-eq 1 ']' 00:22:41.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.197 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:41.198 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:41.198 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:22:41.198 07:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:43.163 07:24:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:43.163 
07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:43.163 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:43.163 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:43.163 Found net devices under 0000:09:00.0: cvl_0_0 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:43.163 Found net devices under 0000:09:00.1: cvl_0_1 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:43.163 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:43.164 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:43.164 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:43.164 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:43.164 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:43.164 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:43.164 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:43.164 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:43.164 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:43.424 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:43.424 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:43.424 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:43.424 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:43.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:43.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.391 ms 00:22:43.424 00:22:43.424 --- 10.0.0.2 ping statistics --- 00:22:43.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.424 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:22:43.424 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:43.424 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:43.424 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:22:43.424 00:22:43.424 --- 10.0.0.1 ping statistics --- 00:22:43.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.424 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:22:43.424 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:43.424 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:22:43.424 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:43.424 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:43.424 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:43.424 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:43.424 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:43.424 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:43.424 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:43.424 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:22:43.424 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:43.424 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:43.424 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:43.424 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2580345 00:22:43.424 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:43.424 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 2580345 00:22:43.424 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 2580345 ']' 00:22:43.424 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.424 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:43.424 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.424 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:43.424 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:43.424 [2024-11-20 07:24:46.693758] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:22:43.424 [2024-11-20 07:24:46.693837] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.424 [2024-11-20 07:24:46.765784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:43.424 [2024-11-20 07:24:46.826409] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:43.424 [2024-11-20 07:24:46.826457] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:43.424 [2024-11-20 07:24:46.826481] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:43.424 [2024-11-20 07:24:46.826493] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:43.424 [2024-11-20 07:24:46.826503] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
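For reference, a condensed sketch of the RPC sequence this multipath-status test drives, reconstructed from the trace that follows; the addresses, ports, NQN and socket paths are the ones used in this run, and the long workspace prefix on rpc.py/bdevperf is abbreviated here:

    # target side: TCP transport, a Malloc namespace, one subsystem, two listeners (ports 4420/4421)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

    # host side: bdevperf attaches the same controller over both listeners in multipath mode
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

    # the test then toggles the ANA state of each listener and checks every path on the host
    scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
      | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'

The same bdev_nvme_get_io_paths/jq query is repeated in the trace below with .current, .connected and .accessible for each port after every ANA state change (optimized, non_optimized, inaccessible), which is what the check_status calls are doing.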
00:22:43.424 [2024-11-20 07:24:46.827935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.424 [2024-11-20 07:24:46.827941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:43.683 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:43.683 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:22:43.683 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:43.683 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:43.683 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:43.683 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:43.683 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2580345 00:22:43.683 07:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:43.942 [2024-11-20 07:24:47.211539] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:43.942 07:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:44.200 Malloc0 00:22:44.200 07:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:44.458 07:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:44.716 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:44.974 [2024-11-20 07:24:48.323778] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:44.974 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:45.233 [2024-11-20 07:24:48.596504] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:45.233 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2580522 00:22:45.233 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:45.233 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:45.233 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2580522 
/var/tmp/bdevperf.sock 00:22:45.233 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 2580522 ']' 00:22:45.233 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:45.233 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:45.233 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:45.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:45.233 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:45.233 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:45.492 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:45.492 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:22:45.492 07:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:45.750 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:46.354 Nvme0n1 00:22:46.354 07:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:46.919 Nvme0n1 00:22:46.919 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:46.919 07:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:48.815 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:48.815 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:49.073 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:49.330 07:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:22:50.263 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:22:50.263 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:50.263 07:24:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:50.263 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:50.829 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:50.829 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:50.829 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:50.829 07:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:50.829 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:50.829 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:50.829 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:50.829 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:51.087 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:51.087 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:51.087 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:51.087 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:51.345 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:51.345 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:51.345 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:51.345 07:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:51.910 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:51.910 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:51.910 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:51.910 07:24:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:51.910 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:51.910 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:22:51.910 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:52.168 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:52.739 07:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:53.673 07:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:53.673 07:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:53.673 07:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:53.673 07:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:53.930 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:53.930 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:53.930 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:53.930 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:54.188 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:54.188 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:54.188 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:54.188 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:54.445 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:54.445 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:54.445 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:54.445 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:54.703 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:54.703 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:54.703 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:54.703 07:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:54.961 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:54.961 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:54.961 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:54.961 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:55.218 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:55.218 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:55.218 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:55.476 07:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:55.735 07:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:57.108 07:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:57.108 07:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:57.108 07:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:57.108 07:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:57.108 07:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:57.108 07:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:57.108 07:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:57.108 07:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:57.365 07:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:57.366 07:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:57.366 07:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:57.366 07:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:57.624 07:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:57.624 07:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:57.624 07:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:57.624 07:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:57.882 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:57.882 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:57.882 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:57.882 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:58.140 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:58.140 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:58.140 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:58.140 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:58.398 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:58.398 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:22:58.398 07:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
non_optimized 00:22:58.656 07:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:59.222 07:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:00.155 07:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:00.155 07:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:00.155 07:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.155 07:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:00.414 07:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:00.414 07:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:00.414 07:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.414 07:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:00.672 07:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:00.672 07:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:00.672 07:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.672 07:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:00.931 07:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:00.932 07:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:00.932 07:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.932 07:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:01.192 07:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:01.192 07:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:01.192 07:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:23:01.192 07:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:01.451 07:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:01.451 07:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:01.451 07:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.451 07:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:01.709 07:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:01.709 07:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:01.709 07:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:01.967 07:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:02.224 07:25:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:03.157 07:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:03.157 07:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:03.157 07:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:03.157 07:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:03.722 07:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:03.722 07:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:03.722 07:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:03.722 07:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:03.722 07:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:03.722 07:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:03.722 07:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:03.722 07:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:03.980 07:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:03.980 07:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:03.980 07:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:03.980 07:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:04.238 07:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:04.238 07:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:04.238 07:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.238 07:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:04.805 07:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:04.805 07:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:04.805 07:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.805 07:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:04.805 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:04.805 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:04.805 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:05.063 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:05.321 07:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:06.696 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:06.696 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:06.696 07:25:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:06.696 07:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:06.696 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:06.696 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:06.696 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:06.696 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:06.953 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:06.953 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:06.953 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:06.953 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:07.211 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:07.211 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:07.211 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.211 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:07.469 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:07.469 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:07.469 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.469 07:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:07.727 07:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:07.727 07:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:07.727 07:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.727 
07:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:08.293 07:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:08.293 07:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:08.293 07:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:23:08.293 07:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:08.860 07:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:08.860 07:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:10.235 07:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:10.235 07:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:10.235 07:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.235 07:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:10.235 07:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:10.235 07:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:10.235 07:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.235 07:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:10.493 07:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:10.493 07:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:10.493 07:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.493 07:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:10.751 07:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:10.751 07:25:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:10.751 07:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.751 07:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:11.009 07:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:11.009 07:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:11.009 07:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.009 07:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:11.267 07:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:11.267 07:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:11.267 07:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.268 07:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:11.526 07:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:11.526 07:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:11.526 07:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:11.784 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:12.043 07:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:13.418 07:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:23:13.418 07:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:13.418 07:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:13.418 07:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:13.418 07:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:13.418 07:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:13.418 07:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:13.418 07:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:13.675 07:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:13.675 07:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:13.675 07:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:13.675 07:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:13.933 07:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:13.933 07:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:13.933 07:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:13.933 07:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:14.191 07:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.191 07:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:14.191 07:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.191 07:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:14.448 07:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.448 07:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:14.448 07:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.448 07:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:14.705 07:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.705 07:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:23:14.705 
07:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:14.963 07:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:15.527 07:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:23:16.458 07:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:23:16.458 07:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:16.458 07:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:16.458 07:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:16.715 07:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:16.715 07:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:16.715 07:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:16.715 07:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:16.973 07:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:16.973 07:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:16.973 07:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:16.973 07:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:17.231 07:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.231 07:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:17.231 07:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.231 07:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:17.489 07:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.489 07:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:17.489 07:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.489 07:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:17.747 07:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.747 07:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:17.747 07:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.747 07:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:18.005 07:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:18.005 07:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:23:18.005 07:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:18.262 07:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:18.519 07:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:23:19.892 07:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:23:19.892 07:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:19.892 07:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.892 07:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:19.892 07:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:19.892 07:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:19.892 07:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.892 07:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:20.150 07:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:23:20.150 07:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:20.150 07:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.150 07:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:20.409 07:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.409 07:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:20.409 07:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.409 07:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:20.667 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.667 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:20.667 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.667 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:20.925 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.925 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:20.925 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.925 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:21.183 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:21.183 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2580522 00:23:21.183 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 2580522 ']' 00:23:21.183 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 2580522 00:23:21.183 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:23:21.183 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:21.183 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2580522 00:23:21.441 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # 
process_name=reactor_2 00:23:21.441 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:21.441 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2580522' 00:23:21.441 killing process with pid 2580522 00:23:21.441 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 2580522 00:23:21.441 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 2580522 00:23:21.441 { 00:23:21.441 "results": [ 00:23:21.441 { 00:23:21.441 "job": "Nvme0n1", 00:23:21.441 "core_mask": "0x4", 00:23:21.441 "workload": "verify", 00:23:21.441 "status": "terminated", 00:23:21.441 "verify_range": { 00:23:21.441 "start": 0, 00:23:21.441 "length": 16384 00:23:21.441 }, 00:23:21.441 "queue_depth": 128, 00:23:21.441 "io_size": 4096, 00:23:21.441 "runtime": 34.381936, 00:23:21.441 "iops": 8024.737175940296, 00:23:21.441 "mibps": 31.34662959351678, 00:23:21.441 "io_failed": 0, 00:23:21.441 "io_timeout": 0, 00:23:21.441 "avg_latency_us": 15924.574637148296, 00:23:21.441 "min_latency_us": 1844.717037037037, 00:23:21.441 "max_latency_us": 4026531.84 00:23:21.441 } 00:23:21.441 ], 00:23:21.441 "core_count": 1 00:23:21.441 } 00:23:21.711 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2580522 00:23:21.711 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:21.711 [2024-11-20 07:24:48.658086] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:23:21.711 [2024-11-20 07:24:48.658192] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2580522 ] 00:23:21.711 [2024-11-20 07:24:48.726946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.711 [2024-11-20 07:24:48.788150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:21.711 Running I/O for 90 seconds... 
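(The port_status checks traced above all follow one pattern: query bdev_nvme_get_io_paths over the bdevperf RPC socket and pick out a single property of the I/O path on a given listener port with jq. A minimal sketch of that pattern follows; the rpc.py path and the jq filter are taken verbatim from the trace, but the function name and argument layout are assumptions for illustration and are not the actual multipath_status.sh source.

# Sketch only: check one property (current/connected/accessible) of the I/O path on one port.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path as seen in the trace
port_status_sketch() {
    local port=$1 prop=$2 expected=$3 actual
    actual=$("$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$prop")
    [[ "$actual" == "$expected" ]]
}
# Example matching the trace: after set_ANA_state non_optimized inaccessible, port 4420
# should stay accessible while port 4421 should not:
#   port_status_sketch 4420 accessible true && port_status_sketch 4421 accessible false

As a sanity check on the bdevperf summary block above: 8024.74 IOPS of 4096-byte I/Os is roughly 31.35 MiB/s, which agrees with the reported mibps value.)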
00:23:21.711 8390.00 IOPS, 32.77 MiB/s [2024-11-20T06:25:25.144Z] 8486.00 IOPS, 33.15 MiB/s [2024-11-20T06:25:25.144Z] 8534.67 IOPS, 33.34 MiB/s [2024-11-20T06:25:25.144Z] 8569.50 IOPS, 33.47 MiB/s [2024-11-20T06:25:25.144Z] 8577.20 IOPS, 33.50 MiB/s [2024-11-20T06:25:25.144Z] 8567.83 IOPS, 33.47 MiB/s [2024-11-20T06:25:25.144Z] 8559.43 IOPS, 33.44 MiB/s [2024-11-20T06:25:25.144Z] 8545.25 IOPS, 33.38 MiB/s [2024-11-20T06:25:25.144Z] 8548.67 IOPS, 33.39 MiB/s [2024-11-20T06:25:25.144Z] 8555.90 IOPS, 33.42 MiB/s [2024-11-20T06:25:25.144Z] 8541.36 IOPS, 33.36 MiB/s [2024-11-20T06:25:25.144Z] 8555.17 IOPS, 33.42 MiB/s [2024-11-20T06:25:25.144Z] 8563.00 IOPS, 33.45 MiB/s [2024-11-20T06:25:25.144Z] 8558.00 IOPS, 33.43 MiB/s [2024-11-20T06:25:25.144Z] 8554.47 IOPS, 33.42 MiB/s [2024-11-20T06:25:25.144Z] [2024-11-20 07:25:05.282747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:113376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.711 [2024-11-20 07:25:05.282804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:21.711 [2024-11-20 07:25:05.282874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:113384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.711 [2024-11-20 07:25:05.282897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:21.711 [2024-11-20 07:25:05.282937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:113392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.712 [2024-11-20 07:25:05.282955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:21.712 [2024-11-20 07:25:05.282978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:113400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.712 [2024-11-20 07:25:05.282995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:21.712 [2024-11-20 07:25:05.283033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:113408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.712 [2024-11-20 07:25:05.283049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:21.712 [2024-11-20 07:25:05.283072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:113416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.712 [2024-11-20 07:25:05.283088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:21.712 [2024-11-20 07:25:05.283126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.712 [2024-11-20 07:25:05.283143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:21.712 [2024-11-20 07:25:05.283168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:113432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.712 [2024-11-20 07:25:05.283185] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:21.712 [2024-11-20 07:25:05.283504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:113440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.712 [2024-11-20 07:25:05.283530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:21.712 [2024-11-20 07:25:05.283570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:113448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.712 [2024-11-20 07:25:05.283590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:21.712 [2024-11-20 07:25:05.283628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:113456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.712 [2024-11-20 07:25:05.283645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:21.712 [2024-11-20 07:25:05.283667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:113464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.712 [2024-11-20 07:25:05.283684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:21.712 [2024-11-20 07:25:05.283723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:113472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.712 [2024-11-20 07:25:05.283738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.712 [2024-11-20 07:25:05.283761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:113480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.712 [2024-11-20 07:25:05.283777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.712 [2024-11-20 07:25:05.283800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:113488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.712 [2024-11-20 07:25:05.283816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:21.712 [2024-11-20 07:25:05.283839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:113496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.712 [2024-11-20 07:25:05.283870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:21.712 [2024-11-20 07:25:05.283934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:113504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.712 [2024-11-20 07:25:05.283972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:21.712 [2024-11-20 07:25:05.284001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:113512 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:21.712 [2024-11-20 07:25:05.284019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:21.712 [2024-11-20 07:25:05.284043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:113520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.712 [2024-11-20 07:25:05.284060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:21.712 [2024-11-20 07:25:05.284084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:113528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.712 [2024-11-20 07:25:05.284101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:21.712 [2024-11-20 07:25:05.284125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:113536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.712 [2024-11-20 07:25:05.284142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:21.712 [2024-11-20 07:25:05.284182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:113544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.712 [2024-11-20 07:25:05.284203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:21.712 [2024-11-20 07:25:05.284244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:113552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.712 [2024-11-20 07:25:05.284262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:21.712 [2024-11-20 07:25:05.284286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:113560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.712 [2024-11-20 07:25:05.284311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:21.712 [2024-11-20 07:25:05.284338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:113568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.712 [2024-11-20 07:25:05.284355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:21.712 [2024-11-20 07:25:05.284379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:113576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.712 [2024-11-20 07:25:05.284396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:21.712 [2024-11-20 07:25:05.284419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:113584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.712 [2024-11-20 07:25:05.284436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:21.712 [2024-11-20 07:25:05.284460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:51 nsid:1 lba:113592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.712 [2024-11-20 07:25:05.284476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:21.712 [2024-11-20 07:25:05.284500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:112800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.712 [2024-11-20 07:25:05.284516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:21.712 [2024-11-20 07:25:05.284540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:113600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.712 [2024-11-20 07:25:05.284556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:21.712 [2024-11-20 07:25:05.284596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:113608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.712 [2024-11-20 07:25:05.284612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:21.713 [2024-11-20 07:25:05.284649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:113616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.713 [2024-11-20 07:25:05.284665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:21.713 [2024-11-20 07:25:05.284688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:113624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.713 [2024-11-20 07:25:05.284704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:21.713 [2024-11-20 07:25:05.284773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:113632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.713 [2024-11-20 07:25:05.284798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:21.713 [2024-11-20 07:25:05.284827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:113640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.713 [2024-11-20 07:25:05.284844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:21.713 [2024-11-20 07:25:05.284868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:113648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.713 [2024-11-20 07:25:05.284883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:21.713 [2024-11-20 07:25:05.284907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:113656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.713 [2024-11-20 07:25:05.284922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:21.713 [2024-11-20 
07:25:05.284961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:113664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.713 [2024-11-20 07:25:05.284978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:21.713 [2024-11-20 07:25:05.285004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:113672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.713 [2024-11-20 07:25:05.285035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:21.713 [2024-11-20 07:25:05.285061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:113680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.713 [2024-11-20 07:25:05.285078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:21.713 [2024-11-20 07:25:05.285103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:113688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.713 [2024-11-20 07:25:05.285120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:21.713 [2024-11-20 07:25:05.285184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:113696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.713 [2024-11-20 07:25:05.285205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:21.713 [2024-11-20 07:25:05.285234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:113704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.713 [2024-11-20 07:25:05.285253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:21.713 [2024-11-20 07:25:05.285278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:113712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.713 [2024-11-20 07:25:05.285295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:21.713 [2024-11-20 07:25:05.285331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:113720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.713 [2024-11-20 07:25:05.285349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:21.713 [2024-11-20 07:25:05.285375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:113728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.713 [2024-11-20 07:25:05.285391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.713 [2024-11-20 07:25:05.285422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.713 [2024-11-20 07:25:05.285440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 
cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:21.713 [2024-11-20 07:25:05.285465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:113744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.713 [2024-11-20 07:25:05.285482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:21.713 [2024-11-20 07:25:05.285507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:113752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.713 [2024-11-20 07:25:05.285525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:21.713 [2024-11-20 07:25:05.285551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:112808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.713 [2024-11-20 07:25:05.285568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:21.713 [2024-11-20 07:25:05.285594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:112816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.713 [2024-11-20 07:25:05.285626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:21.713 [2024-11-20 07:25:05.285655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:112824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.713 [2024-11-20 07:25:05.285672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:21.713 [2024-11-20 07:25:05.285697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:112832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.713 [2024-11-20 07:25:05.285713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:21.713 [2024-11-20 07:25:05.285738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:112840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.713 [2024-11-20 07:25:05.285755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:21.713 [2024-11-20 07:25:05.285779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:112848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.713 [2024-11-20 07:25:05.285796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:21.713 [2024-11-20 07:25:05.285820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:112856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.713 [2024-11-20 07:25:05.285836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:21.713 [2024-11-20 07:25:05.285860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:112864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.713 [2024-11-20 07:25:05.285877] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:21.713 [2024-11-20 07:25:05.285902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:112872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.713 [2024-11-20 07:25:05.285918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:21.713 [2024-11-20 07:25:05.285947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:112880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.713 [2024-11-20 07:25:05.285964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:21.713 [2024-11-20 07:25:05.285989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:112888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.713 [2024-11-20 07:25:05.286005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:21.713 [2024-11-20 07:25:05.286030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.714 [2024-11-20 07:25:05.286046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:21.714 [2024-11-20 07:25:05.286070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:112904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.714 [2024-11-20 07:25:05.286087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:21.714 [2024-11-20 07:25:05.286112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:112912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.714 [2024-11-20 07:25:05.286128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:21.714 [2024-11-20 07:25:05.286153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:112920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.714 [2024-11-20 07:25:05.286170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:21.714 [2024-11-20 07:25:05.286564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:113760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.714 [2024-11-20 07:25:05.286588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:21.714 [2024-11-20 07:25:05.286622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:112928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.714 [2024-11-20 07:25:05.286641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:21.714 [2024-11-20 07:25:05.286669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:112936 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:21.714 [2024-11-20 07:25:05.286686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:21.714 [2024-11-20 07:25:05.286713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:112944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.714 [2024-11-20 07:25:05.286730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:21.714 [2024-11-20 07:25:05.286758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:112952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.714 [2024-11-20 07:25:05.286775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:21.714 [2024-11-20 07:25:05.286803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:112960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.714 [2024-11-20 07:25:05.286835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:21.714 [2024-11-20 07:25:05.286869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:112968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.714 [2024-11-20 07:25:05.286887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:21.714 [2024-11-20 07:25:05.286913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:112976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.714 [2024-11-20 07:25:05.286930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:21.714 [2024-11-20 07:25:05.286957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:112984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.714 [2024-11-20 07:25:05.286973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:21.714 [2024-11-20 07:25:05.286999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:112992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.714 [2024-11-20 07:25:05.287016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:21.714 [2024-11-20 07:25:05.287043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:113000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.714 [2024-11-20 07:25:05.287060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:21.714 [2024-11-20 07:25:05.287087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:113008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.714 [2024-11-20 07:25:05.287103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:21.714 [2024-11-20 07:25:05.287130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:105 nsid:1 lba:113016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.714 [2024-11-20 07:25:05.287148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:21.714 [2024-11-20 07:25:05.287175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:113024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.714 [2024-11-20 07:25:05.287191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.714 [2024-11-20 07:25:05.287218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:113032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.714 [2024-11-20 07:25:05.287234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:21.714 [2024-11-20 07:25:05.287260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:113040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.714 [2024-11-20 07:25:05.287278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:21.714 [2024-11-20 07:25:05.287332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:113048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.714 [2024-11-20 07:25:05.287352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:21.714 [2024-11-20 07:25:05.287435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:113056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.714 [2024-11-20 07:25:05.287457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:21.714 [2024-11-20 07:25:05.287489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:113064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.714 [2024-11-20 07:25:05.287515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:21.714 [2024-11-20 07:25:05.287546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:113072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.714 [2024-11-20 07:25:05.287563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:21.714 [2024-11-20 07:25:05.287591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:113080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.714 [2024-11-20 07:25:05.287608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:21.714 [2024-11-20 07:25:05.287636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:113088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.714 [2024-11-20 07:25:05.287653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:21.714 [2024-11-20 
07:25:05.287680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:113096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.714 [2024-11-20 07:25:05.287697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:21.714 [2024-11-20 07:25:05.287739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:113104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.714 [2024-11-20 07:25:05.287755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:21.714 [2024-11-20 07:25:05.287784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:113112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.714 [2024-11-20 07:25:05.287799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:21.714 [2024-11-20 07:25:05.287827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:113120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.714 [2024-11-20 07:25:05.287843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:21.714 [2024-11-20 07:25:05.287870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:113128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.714 [2024-11-20 07:25:05.287886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:21.714 [2024-11-20 07:25:05.287913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:113136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.715 [2024-11-20 07:25:05.287929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:21.715 [2024-11-20 07:25:05.287956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:113144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.715 [2024-11-20 07:25:05.287973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:21.715 [2024-11-20 07:25:05.288000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:113152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.715 [2024-11-20 07:25:05.288016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:21.715 [2024-11-20 07:25:05.288044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:113160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.715 [2024-11-20 07:25:05.288064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:21.715 [2024-11-20 07:25:05.288092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:113168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.715 [2024-11-20 07:25:05.288108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:40 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:21.715 [2024-11-20 07:25:05.288137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:113176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.715 [2024-11-20 07:25:05.288153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:21.715 [2024-11-20 07:25:05.288180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:113184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.715 [2024-11-20 07:25:05.288196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:21.715 [2024-11-20 07:25:05.288224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:113192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.715 [2024-11-20 07:25:05.288240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:21.715 [2024-11-20 07:25:05.288267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:113200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.715 [2024-11-20 07:25:05.288298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:21.715 [2024-11-20 07:25:05.288338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:113208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.715 [2024-11-20 07:25:05.288355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:21.715 [2024-11-20 07:25:05.288384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:113216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.715 [2024-11-20 07:25:05.288401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:21.715 [2024-11-20 07:25:05.288428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:113224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.715 [2024-11-20 07:25:05.288444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:21.715 [2024-11-20 07:25:05.288472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:113232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.715 [2024-11-20 07:25:05.288489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:21.715 [2024-11-20 07:25:05.288517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:113240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.715 [2024-11-20 07:25:05.288533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:21.715 [2024-11-20 07:25:05.288561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:113248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.715 [2024-11-20 07:25:05.288577] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:21.715 [2024-11-20 07:25:05.288621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:113256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.715 [2024-11-20 07:25:05.288642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:21.715 [2024-11-20 07:25:05.288671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:113264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.715 [2024-11-20 07:25:05.288688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:21.715 [2024-11-20 07:25:05.288715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:113272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.715 [2024-11-20 07:25:05.288731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:21.715 [2024-11-20 07:25:05.288759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:113280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.715 [2024-11-20 07:25:05.288775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.715 [2024-11-20 07:25:05.288802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:113288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.715 [2024-11-20 07:25:05.288818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:21.715 [2024-11-20 07:25:05.288845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:113296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.715 [2024-11-20 07:25:05.288861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:21.715 [2024-11-20 07:25:05.288888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:113304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.715 [2024-11-20 07:25:05.288903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:21.715 [2024-11-20 07:25:05.288931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:113312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.715 [2024-11-20 07:25:05.288947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:21.715 [2024-11-20 07:25:05.288973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:113320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.715 [2024-11-20 07:25:05.288989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:21.715 [2024-11-20 07:25:05.289016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:113328 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:21.715 [2024-11-20 07:25:05.289032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:21.715 [2024-11-20 07:25:05.289058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:113336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.715 [2024-11-20 07:25:05.289074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:21.715 [2024-11-20 07:25:05.289102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:113344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.715 [2024-11-20 07:25:05.289118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:21.715 [2024-11-20 07:25:05.289145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:113352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.715 [2024-11-20 07:25:05.289160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:21.715 [2024-11-20 07:25:05.289192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:113360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.715 [2024-11-20 07:25:05.289209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:21.715 [2024-11-20 07:25:05.289237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:113368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.715 [2024-11-20 07:25:05.289253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:21.715 8049.25 IOPS, 31.44 MiB/s [2024-11-20T06:25:25.148Z] 7575.76 IOPS, 29.59 MiB/s [2024-11-20T06:25:25.148Z] 7154.89 IOPS, 27.95 MiB/s [2024-11-20T06:25:25.148Z] 6778.32 IOPS, 26.48 MiB/s [2024-11-20T06:25:25.148Z] 6842.50 IOPS, 26.73 MiB/s [2024-11-20T06:25:25.148Z] 6927.81 IOPS, 27.06 MiB/s [2024-11-20T06:25:25.148Z] 7024.45 IOPS, 27.44 MiB/s [2024-11-20T06:25:25.148Z] 7197.26 IOPS, 28.11 MiB/s [2024-11-20T06:25:25.148Z] 7357.17 IOPS, 28.74 MiB/s [2024-11-20T06:25:25.148Z] 7508.76 IOPS, 29.33 MiB/s [2024-11-20T06:25:25.148Z] 7543.88 IOPS, 29.47 MiB/s [2024-11-20T06:25:25.149Z] 7576.85 IOPS, 29.60 MiB/s [2024-11-20T06:25:25.149Z] 7609.07 IOPS, 29.72 MiB/s [2024-11-20T06:25:25.149Z] 7691.31 IOPS, 30.04 MiB/s [2024-11-20T06:25:25.149Z] 7806.07 IOPS, 30.49 MiB/s [2024-11-20T06:25:25.149Z] 7917.71 IOPS, 30.93 MiB/s [2024-11-20T06:25:25.149Z] [2024-11-20 07:25:21.911433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:57312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.716 [2024-11-20 07:25:21.911495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:21.716 [2024-11-20 07:25:21.911541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:57328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.716 [2024-11-20 07:25:21.911561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 
sqhd:0046 p:0 m:0 dnr:0 00:23:21.716 [2024-11-20 07:25:21.911584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:57344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.716 [2024-11-20 07:25:21.911603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:21.716 [2024-11-20 07:25:21.911642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:57360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.716 [2024-11-20 07:25:21.911658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:21.716 [2024-11-20 07:25:21.911680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:57376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.716 [2024-11-20 07:25:21.911698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:21.716 [2024-11-20 07:25:21.911719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:57392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.716 [2024-11-20 07:25:21.911736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:21.716 [2024-11-20 07:25:21.911758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:57408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.716 [2024-11-20 07:25:21.911774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:21.716 [2024-11-20 07:25:21.911798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:57424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.716 [2024-11-20 07:25:21.911826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:21.716 [2024-11-20 07:25:21.911849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:57440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.716 [2024-11-20 07:25:21.911876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:21.716 [2024-11-20 07:25:21.911899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:57456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.716 [2024-11-20 07:25:21.911916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:21.716 [2024-11-20 07:25:21.911937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:57472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.716 [2024-11-20 07:25:21.911953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:21.716 [2024-11-20 07:25:21.911974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:57488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.716 [2024-11-20 07:25:21.911991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:21.716 [2024-11-20 07:25:21.912012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:57504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.716 [2024-11-20 07:25:21.912028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:21.716 [2024-11-20 07:25:21.912049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:57272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.716 [2024-11-20 07:25:21.912065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:21.716 [2024-11-20 07:25:21.912087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:57520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.716 [2024-11-20 07:25:21.912103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:21.716 [2024-11-20 07:25:21.912125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:57536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.716 [2024-11-20 07:25:21.912141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:21.716 [2024-11-20 07:25:21.912163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:57552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.716 [2024-11-20 07:25:21.912180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:21.716 [2024-11-20 07:25:21.912202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:57568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.716 [2024-11-20 07:25:21.912219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:21.716 [2024-11-20 07:25:21.912241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:57584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.716 [2024-11-20 07:25:21.912258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:21.716 [2024-11-20 07:25:21.912295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:57600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.716 [2024-11-20 07:25:21.912324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:21.716 [2024-11-20 07:25:21.912361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:57616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.716 [2024-11-20 07:25:21.912379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:21.716 [2024-11-20 07:25:21.912407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:57632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.716 [2024-11-20 07:25:21.912425] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:21.716 [2024-11-20 07:25:21.912447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:57648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.716 [2024-11-20 07:25:21.912463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:21.716 [2024-11-20 07:25:21.912485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:57664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.716 [2024-11-20 07:25:21.912503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:21.716 [2024-11-20 07:25:21.912525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:57680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.716 [2024-11-20 07:25:21.912542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:21.716 [2024-11-20 07:25:21.912565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:57696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.716 [2024-11-20 07:25:21.912582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:21.716 [2024-11-20 07:25:21.912605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:57712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.716 [2024-11-20 07:25:21.912622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:21.716 [2024-11-20 07:25:21.912654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:57728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.716 [2024-11-20 07:25:21.912670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:21.716 [2024-11-20 07:25:21.912692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:57744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.716 [2024-11-20 07:25:21.912708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.716 [2024-11-20 07:25:21.912731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:57760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.717 [2024-11-20 07:25:21.912749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:21.717 [2024-11-20 07:25:21.912771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:57776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.717 [2024-11-20 07:25:21.912788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:21.717 [2024-11-20 07:25:21.912810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:57792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:21.717 [2024-11-20 07:25:21.912827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:21.717 [2024-11-20 07:25:21.912849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:57808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.717 [2024-11-20 07:25:21.912865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:21.717 [2024-11-20 07:25:21.912892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:57824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.717 [2024-11-20 07:25:21.912909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:21.717 [2024-11-20 07:25:21.912931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:57840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.717 [2024-11-20 07:25:21.912948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:21.717 [2024-11-20 07:25:21.912970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:57856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.717 [2024-11-20 07:25:21.912987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:21.717 [2024-11-20 07:25:21.913009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:57872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.717 [2024-11-20 07:25:21.913026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:21.717 [2024-11-20 07:25:21.913049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:57888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.717 [2024-11-20 07:25:21.913065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:21.717 [2024-11-20 07:25:21.913087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:57904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.717 [2024-11-20 07:25:21.913103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:21.717 [2024-11-20 07:25:21.913126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:57920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.717 [2024-11-20 07:25:21.913143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:21.717 [2024-11-20 07:25:21.913165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:57288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.717 [2024-11-20 07:25:21.913181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:21.717 [2024-11-20 07:25:21.913219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 
lba:57936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.717 [2024-11-20 07:25:21.913237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:21.717 [2024-11-20 07:25:21.913260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:57952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.717 [2024-11-20 07:25:21.913277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:21.717 [2024-11-20 07:25:21.913309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:57968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.717 [2024-11-20 07:25:21.913353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:21.717 [2024-11-20 07:25:21.913378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:57984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.717 [2024-11-20 07:25:21.913395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:21.717 [2024-11-20 07:25:21.913418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:58000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.717 [2024-11-20 07:25:21.913440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:21.717 [2024-11-20 07:25:21.913463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:57320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.717 [2024-11-20 07:25:21.913480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:21.717 [2024-11-20 07:25:21.913502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:57352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.717 [2024-11-20 07:25:21.913520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:21.717 [2024-11-20 07:25:21.913543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:57384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.717 [2024-11-20 07:25:21.913559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:21.717 [2024-11-20 07:25:21.913582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:57416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.717 [2024-11-20 07:25:21.913606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:21.717 [2024-11-20 07:25:21.913645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:57448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.717 [2024-11-20 07:25:21.913662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:21.717 [2024-11-20 07:25:21.913684] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:57480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.717 [2024-11-20 07:25:21.913700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:21.717 [2024-11-20 07:25:21.915347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:58016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.717 [2024-11-20 07:25:21.915373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:21.717 [2024-11-20 07:25:21.915401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:58032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.717 [2024-11-20 07:25:21.915419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:21.717 [2024-11-20 07:25:21.915441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:58048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.717 [2024-11-20 07:25:21.915458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:21.717 [2024-11-20 07:25:21.915480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:58064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.717 [2024-11-20 07:25:21.915497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:21.717 [2024-11-20 07:25:21.915519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:58080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.717 [2024-11-20 07:25:21.915536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:21.717 [2024-11-20 07:25:21.915557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:58096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.718 [2024-11-20 07:25:21.915580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:21.718 [2024-11-20 07:25:21.915612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:58112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.718 [2024-11-20 07:25:21.915629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:21.718 [2024-11-20 07:25:21.915651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:58128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.718 [2024-11-20 07:25:21.915668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.718 [2024-11-20 07:25:21.915690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:58144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.718 [2024-11-20 07:25:21.915707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:21.718 [2024-11-20 07:25:21.915745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:58160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.718 [2024-11-20 07:25:21.915761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:21.718 [2024-11-20 07:25:21.915783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:58176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.718 [2024-11-20 07:25:21.915798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:21.718 [2024-11-20 07:25:21.915820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.718 [2024-11-20 07:25:21.915836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:21.718 [2024-11-20 07:25:21.915858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:58208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.718 [2024-11-20 07:25:21.915873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:21.718 [2024-11-20 07:25:21.915895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:58224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.718 [2024-11-20 07:25:21.915911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:21.718 [2024-11-20 07:25:21.915950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:58240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.718 [2024-11-20 07:25:21.915966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:21.718 [2024-11-20 07:25:21.915989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:57512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.718 [2024-11-20 07:25:21.916005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:21.718 [2024-11-20 07:25:21.916028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:57544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.718 [2024-11-20 07:25:21.916045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:21.718 [2024-11-20 07:25:21.916067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:57576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.718 [2024-11-20 07:25:21.916083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:21.718 [2024-11-20 07:25:21.916110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:57608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.718 [2024-11-20 07:25:21.916127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:109 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:21.718 [2024-11-20 07:25:21.916154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:57640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.718 [2024-11-20 07:25:21.916170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:21.718 [2024-11-20 07:25:21.916193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:57672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.718 [2024-11-20 07:25:21.916209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:21.718 [2024-11-20 07:25:21.916231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:57704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.718 [2024-11-20 07:25:21.916247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:21.718 [2024-11-20 07:25:21.916269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:57736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.718 [2024-11-20 07:25:21.916285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:21.718 [2024-11-20 07:25:21.916317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:57752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.718 [2024-11-20 07:25:21.916342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:21.718 [2024-11-20 07:25:21.916365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:57784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.718 [2024-11-20 07:25:21.916381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:21.718 [2024-11-20 07:25:21.916404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:57816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.718 [2024-11-20 07:25:21.916421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:21.718 [2024-11-20 07:25:21.916444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:57848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.718 [2024-11-20 07:25:21.916460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:21.718 [2024-11-20 07:25:21.917474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:58264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.718 [2024-11-20 07:25:21.917498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:21.718 [2024-11-20 07:25:21.917526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:58280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.718 [2024-11-20 07:25:21.917544] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
[nvme_qpair.c NOTICE flood, 00:23:21.718-00:23:21.725 (2024-11-20 07:25:21.917584 through 07:25:21.934964): several hundred further 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion pairs for READ and WRITE commands on sqid:1 nsid:1 (lba 57288-58856, len:8), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 and sequentially advancing sqhd values]
00:23:21.725 [2024-11-20 07:25:21.934987] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:58720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.725 [2024-11-20 07:25:21.935004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:21.725 [2024-11-20 07:25:21.935027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:58752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.725 [2024-11-20 07:25:21.935044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:21.725 [2024-11-20 07:25:21.935066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:58472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.725 [2024-11-20 07:25:21.935083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:21.725 [2024-11-20 07:25:21.935110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:58536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-11-20 07:25:21.935128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:21.726 [2024-11-20 07:25:21.935150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:58480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.726 [2024-11-20 07:25:21.935167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:21.726 [2024-11-20 07:25:21.935189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:58544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.726 [2024-11-20 07:25:21.935206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:21.726 [2024-11-20 07:25:21.935229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:57824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-11-20 07:25:21.935246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:21.726 [2024-11-20 07:25:21.935267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:58016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-11-20 07:25:21.935284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:21.726 [2024-11-20 07:25:21.935315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:58384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.726 [2024-11-20 07:25:21.935333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:21.726 [2024-11-20 07:25:21.935357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:58280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.726 [2024-11-20 07:25:21.935373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006d p:0 m:0 dnr:0 
00:23:21.726 [2024-11-20 07:25:21.935395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:58232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-11-20 07:25:21.935412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:21.726 [2024-11-20 07:25:21.935434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-11-20 07:25:21.935451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:21.726 [2024-11-20 07:25:21.935473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:57904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.726 [2024-11-20 07:25:21.935490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:21.726 [2024-11-20 07:25:21.935513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:58128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.726 [2024-11-20 07:25:21.935529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:21.726 [2024-11-20 07:25:21.935552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:58592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.726 [2024-11-20 07:25:21.935568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:21.726 [2024-11-20 07:25:21.935605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:58408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-11-20 07:25:21.935625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:21.726 [2024-11-20 07:25:21.935649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:57376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-11-20 07:25:21.935665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:21.726 [2024-11-20 07:25:21.936767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:58664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.726 [2024-11-20 07:25:21.936790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:21.726 [2024-11-20 07:25:21.936816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:57872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-11-20 07:25:21.936834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:21.726 [2024-11-20 07:25:21.936856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:58640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-11-20 07:25:21.936872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:109 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:21.726 [2024-11-20 07:25:21.936893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:58672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-11-20 07:25:21.936909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:21.726 [2024-11-20 07:25:21.936948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:58872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.726 [2024-11-20 07:25:21.936966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:21.726 [2024-11-20 07:25:21.936988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:58888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.726 [2024-11-20 07:25:21.937005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:21.726 [2024-11-20 07:25:21.937028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:58904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.726 [2024-11-20 07:25:21.937046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:21.726 [2024-11-20 07:25:21.937068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:58920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.726 [2024-11-20 07:25:21.937085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:21.726 [2024-11-20 07:25:21.937108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:58936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.726 [2024-11-20 07:25:21.937125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:21.726 [2024-11-20 07:25:21.937148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:58952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.726 [2024-11-20 07:25:21.937164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:21.726 [2024-11-20 07:25:21.937186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:58968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.726 [2024-11-20 07:25:21.937208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:21.726 [2024-11-20 07:25:21.937246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:58984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.726 [2024-11-20 07:25:21.937263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.726 [2024-11-20 07:25:21.937286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:59000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.726 [2024-11-20 07:25:21.937326] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.726 [2024-11-20 07:25:21.937362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:59016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.726 [2024-11-20 07:25:21.937380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:21.726 [2024-11-20 07:25:21.937402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:59032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.726 [2024-11-20 07:25:21.937419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:21.726 [2024-11-20 07:25:21.937442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:58728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-11-20 07:25:21.937459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:21.726 [2024-11-20 07:25:21.937482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:58760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-11-20 07:25:21.937499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:21.726 [2024-11-20 07:25:21.938464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:58496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.726 [2024-11-20 07:25:21.938488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:21.726 [2024-11-20 07:25:21.938515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:58560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-11-20 07:25:21.938534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:21.727 [2024-11-20 07:25:21.938556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:57584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-11-20 07:25:21.938573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:21.727 [2024-11-20 07:25:21.938596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:58792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.727 [2024-11-20 07:25:21.938612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:21.727 [2024-11-20 07:25:21.938635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:58824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.727 [2024-11-20 07:25:21.938651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:21.727 [2024-11-20 07:25:21.938675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:58856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:21.727 [2024-11-20 07:25:21.938699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:21.727 [2024-11-20 07:25:21.938724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:58192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-11-20 07:25:21.938741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:21.727 [2024-11-20 07:25:21.938763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-11-20 07:25:21.938781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:21.727 [2024-11-20 07:25:21.938804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:58752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.727 [2024-11-20 07:25:21.938821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:21.727 [2024-11-20 07:25:21.938843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:58536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-11-20 07:25:21.938860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:21.727 [2024-11-20 07:25:21.938882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:58544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.727 [2024-11-20 07:25:21.938899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:21.727 [2024-11-20 07:25:21.938922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:58016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-11-20 07:25:21.938939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:21.727 [2024-11-20 07:25:21.938961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:58280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.727 [2024-11-20 07:25:21.938978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:21.727 [2024-11-20 07:25:21.939000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:58368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-11-20 07:25:21.939017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:21.727 [2024-11-20 07:25:21.939039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:58128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.727 [2024-11-20 07:25:21.939056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:21.727 [2024-11-20 07:25:21.939079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 
lba:58408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-11-20 07:25:21.939096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:21.727 [2024-11-20 07:25:21.939117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:57408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-11-20 07:25:21.939134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:21.727 [2024-11-20 07:25:21.939157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:59040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.727 [2024-11-20 07:25:21.939173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:21.727 [2024-11-20 07:25:21.939201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:59056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.727 [2024-11-20 07:25:21.939218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:21.727 [2024-11-20 07:25:21.939241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:59072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.727 [2024-11-20 07:25:21.939258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:21.727 [2024-11-20 07:25:21.939280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.727 [2024-11-20 07:25:21.939297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:21.727 [2024-11-20 07:25:21.939329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:59104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.727 [2024-11-20 07:25:21.939347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:21.727 [2024-11-20 07:25:21.939372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:59120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.727 [2024-11-20 07:25:21.939388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:21.727 [2024-11-20 07:25:21.939411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:59136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.727 [2024-11-20 07:25:21.939428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:21.727 [2024-11-20 07:25:21.939450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:58576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-11-20 07:25:21.939467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:21.727 [2024-11-20 07:25:21.939489] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:57872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-11-20 07:25:21.939507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:21.727 [2024-11-20 07:25:21.939530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:58672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-11-20 07:25:21.939548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:21.727 [2024-11-20 07:25:21.940218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:58888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.727 [2024-11-20 07:25:21.940242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.727 [2024-11-20 07:25:21.940270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:58920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.727 [2024-11-20 07:25:21.940289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:21.727 [2024-11-20 07:25:21.940326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:58952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.727 [2024-11-20 07:25:21.940363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:21.727 [2024-11-20 07:25:21.940398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:58984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.727 [2024-11-20 07:25:21.940417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:21.727 [2024-11-20 07:25:21.940439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:59016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.727 [2024-11-20 07:25:21.940456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:21.727 [2024-11-20 07:25:21.940479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:58728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.727 [2024-11-20 07:25:21.940496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:21.728 [2024-11-20 07:25:21.942238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:58648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-11-20 07:25:21.942263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:21.728 [2024-11-20 07:25:21.942290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:59144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.728 [2024-11-20 07:25:21.942317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:23:21.728 [2024-11-20 07:25:21.942342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:59160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.728 [2024-11-20 07:25:21.942367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:21.728 [2024-11-20 07:25:21.942390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:59176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.728 [2024-11-20 07:25:21.942407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:21.728 [2024-11-20 07:25:21.942429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:59192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.728 [2024-11-20 07:25:21.942446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:21.728 [2024-11-20 07:25:21.942469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:59208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.728 [2024-11-20 07:25:21.942486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:21.728 [2024-11-20 07:25:21.942508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:58784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-11-20 07:25:21.942524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:21.728 [2024-11-20 07:25:21.942547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:58816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-11-20 07:25:21.942563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:21.728 [2024-11-20 07:25:21.942601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:58848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-11-20 07:25:21.942624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:21.728 [2024-11-20 07:25:21.942661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:58560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-11-20 07:25:21.942683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:21.728 [2024-11-20 07:25:21.942705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:58792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.728 [2024-11-20 07:25:21.942737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:21.728 [2024-11-20 07:25:21.942761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:58856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.728 [2024-11-20 07:25:21.942779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:21.728 [2024-11-20 07:25:21.942802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:58616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-11-20 07:25:21.942819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:21.728 [2024-11-20 07:25:21.942842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:58536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-11-20 07:25:21.942859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:21.728 [2024-11-20 07:25:21.942881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:58016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-11-20 07:25:21.942898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:21.728 [2024-11-20 07:25:21.942921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:58368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-11-20 07:25:21.942938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:21.728 [2024-11-20 07:25:21.942960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:58408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-11-20 07:25:21.942976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:21.728 [2024-11-20 07:25:21.942999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:59040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.728 [2024-11-20 07:25:21.943016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:21.728 [2024-11-20 07:25:21.943038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:59072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.728 [2024-11-20 07:25:21.943054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:21.728 [2024-11-20 07:25:21.943077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.728 [2024-11-20 07:25:21.943093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:21.728 [2024-11-20 07:25:21.943116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:59136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.728 [2024-11-20 07:25:21.943132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:21.728 [2024-11-20 07:25:21.943154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:57872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-11-20 07:25:21.943175] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:21.728 [2024-11-20 07:25:21.943198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:58704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-11-20 07:25:21.943216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:21.728 [2024-11-20 07:25:21.943239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:58768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-11-20 07:25:21.943256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:21.728 [2024-11-20 07:25:21.943278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:58448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-11-20 07:25:21.943295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:21.728 [2024-11-20 07:25:21.943328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:57520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.728 [2024-11-20 07:25:21.943350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:21.728 [2024-11-20 07:25:21.943372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:58920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.728 [2024-11-20 07:25:21.943388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.729 [2024-11-20 07:25:21.943411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:58984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.729 [2024-11-20 07:25:21.943428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:21.729 [2024-11-20 07:25:21.943454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:58728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-11-20 07:25:21.943472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:21.729 [2024-11-20 07:25:21.944998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:59224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.729 [2024-11-20 07:25:21.945022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:21.729 [2024-11-20 07:25:21.945051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:59240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.729 [2024-11-20 07:25:21.945070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:21.729 [2024-11-20 07:25:21.945093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:59256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:21.729 [2024-11-20 07:25:21.945111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:21.729 [2024-11-20 07:25:21.945134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:59272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.729 [2024-11-20 07:25:21.945152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:21.729 [2024-11-20 07:25:21.945175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:59288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.729 [2024-11-20 07:25:21.945193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:21.729 [2024-11-20 07:25:21.945221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.729 [2024-11-20 07:25:21.945255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:21.729 [2024-11-20 07:25:21.945278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:59320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.729 [2024-11-20 07:25:21.945318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:21.729 [2024-11-20 07:25:21.945346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:59336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.729 [2024-11-20 07:25:21.945365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:21.729 [2024-11-20 07:25:21.945387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:58696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-11-20 07:25:21.945404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:21.729 [2024-11-20 07:25:21.945427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:58896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-11-20 07:25:21.945444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:21.729 [2024-11-20 07:25:21.945466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:58928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-11-20 07:25:21.945483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:21.729 [2024-11-20 07:25:21.945506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:58960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-11-20 07:25:21.945523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:21.729 [2024-11-20 07:25:21.945546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 
lba:58992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-11-20 07:25:21.945563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:21.729 [2024-11-20 07:25:21.945585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:59024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-11-20 07:25:21.945628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:21.729 [2024-11-20 07:25:21.945651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:59360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.729 [2024-11-20 07:25:21.945668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:21.729 [2024-11-20 07:25:21.945689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:59376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.729 [2024-11-20 07:25:21.945705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:21.729 [2024-11-20 07:25:21.945726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:58776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-11-20 07:25:21.945743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:21.729 [2024-11-20 07:25:21.945768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:58840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-11-20 07:25:21.945785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:21.729 [2024-11-20 07:25:21.945807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:58480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-11-20 07:25:21.945823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:21.729 [2024-11-20 07:25:21.945845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:59144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.729 [2024-11-20 07:25:21.945861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:21.729 [2024-11-20 07:25:21.945882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:59176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.729 [2024-11-20 07:25:21.945898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:21.729 [2024-11-20 07:25:21.945920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:59208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.729 [2024-11-20 07:25:21.945936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:21.729 [2024-11-20 07:25:21.945958] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:58816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-11-20 07:25:21.945974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:21.729 [2024-11-20 07:25:21.945995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:58560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-11-20 07:25:21.946011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:21.729 [2024-11-20 07:25:21.946032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:58856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.729 [2024-11-20 07:25:21.946049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:21.729 [2024-11-20 07:25:21.946070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:58536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-11-20 07:25:21.946086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:21.729 [2024-11-20 07:25:21.946107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:58368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.729 [2024-11-20 07:25:21.946123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:21.729 [2024-11-20 07:25:21.946145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:59040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.729 [2024-11-20 07:25:21.946162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:21.729 [2024-11-20 07:25:21.946184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.729 [2024-11-20 07:25:21.946201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:21.730 [2024-11-20 07:25:21.946222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:57872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-11-20 07:25:21.946243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.730 [2024-11-20 07:25:21.946266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:58768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-11-20 07:25:21.946297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:21.730 [2024-11-20 07:25:21.946329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:57520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-11-20 07:25:21.946357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 
00:23:21.730 [2024-11-20 07:25:21.946380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:58984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.730 [2024-11-20 07:25:21.946398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:21.730 [2024-11-20 07:25:21.948546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:57904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-11-20 07:25:21.948572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:21.730 [2024-11-20 07:25:21.948626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:59392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.730 [2024-11-20 07:25:21.948646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:21.730 [2024-11-20 07:25:21.948686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:59408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.730 [2024-11-20 07:25:21.948703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:21.730 [2024-11-20 07:25:21.948726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:59424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.730 [2024-11-20 07:25:21.948743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:21.730 [2024-11-20 07:25:21.948766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:59440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.730 [2024-11-20 07:25:21.948784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:21.730 [2024-11-20 07:25:21.948807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:59456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.730 [2024-11-20 07:25:21.948824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:21.730 [2024-11-20 07:25:21.948846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:59048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-11-20 07:25:21.948863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:21.730 [2024-11-20 07:25:21.948886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:59080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-11-20 07:25:21.948902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:21.730 [2024-11-20 07:25:21.948924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:59112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-11-20 07:25:21.948946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:109 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:21.730 [2024-11-20 07:25:21.948969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:58664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-11-20 07:25:21.948986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:21.730 [2024-11-20 07:25:21.949009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:58904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-11-20 07:25:21.949026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:21.730 [2024-11-20 07:25:21.949048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:58968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-11-20 07:25:21.949065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:21.730 [2024-11-20 07:25:21.949088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:59032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-11-20 07:25:21.949105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:21.730 [2024-11-20 07:25:21.949127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:59480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.730 [2024-11-20 07:25:21.949144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:21.730 [2024-11-20 07:25:21.949166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:59496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.730 [2024-11-20 07:25:21.949198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:21.730 [2024-11-20 07:25:21.949220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:59152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-11-20 07:25:21.949236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:21.730 [2024-11-20 07:25:21.949258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:59184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-11-20 07:25:21.949289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:21.730 [2024-11-20 07:25:21.949322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:59216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-11-20 07:25:21.949345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:21.730 [2024-11-20 07:25:21.950392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.730 [2024-11-20 07:25:21.950416] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:21.730 [2024-11-20 07:25:21.950443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:59272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.730 [2024-11-20 07:25:21.950461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:21.730 [2024-11-20 07:25:21.950484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:59304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.730 [2024-11-20 07:25:21.950502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:21.730 [2024-11-20 07:25:21.950537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:59336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.730 [2024-11-20 07:25:21.950555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:21.730 [2024-11-20 07:25:21.950578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:58896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-11-20 07:25:21.950595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:21.730 [2024-11-20 07:25:21.950626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:58960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-11-20 07:25:21.950644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:21.730 [2024-11-20 07:25:21.950666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:59024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-11-20 07:25:21.950698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:21.730 [2024-11-20 07:25:21.950721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:59376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.730 [2024-11-20 07:25:21.950738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:21.730 [2024-11-20 07:25:21.950776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:58840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.730 [2024-11-20 07:25:21.950793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:21.730 [2024-11-20 07:25:21.950816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:59144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.731 [2024-11-20 07:25:21.950838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.731 [2024-11-20 07:25:21.950861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:59208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:21.731 [2024-11-20 07:25:21.950879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.731 [2024-11-20 07:25:21.950902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:58560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-11-20 07:25:21.950923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:21.731 [2024-11-20 07:25:21.950946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:58536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-11-20 07:25:21.950964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:21.731 [2024-11-20 07:25:21.950986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:59040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.731 [2024-11-20 07:25:21.951005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:21.731 [2024-11-20 07:25:21.951028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:57872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-11-20 07:25:21.951044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:21.731 [2024-11-20 07:25:21.951072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:57520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-11-20 07:25:21.951089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:21.731 [2024-11-20 07:25:21.951112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:58824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-11-20 07:25:21.951129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:21.731 [2024-11-20 07:25:21.951152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:58544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-11-20 07:25:21.951169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:21.731 [2024-11-20 07:25:21.951191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:58128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-11-20 07:25:21.951207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:21.731 [2024-11-20 07:25:21.951229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:59088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-11-20 07:25:21.951245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:21.731 [2024-11-20 07:25:21.951268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 
lba:58888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-11-20 07:25:21.951284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:21.731 [2024-11-20 07:25:21.951314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:59016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-11-20 07:25:21.951333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:21.731 [2024-11-20 07:25:21.951362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:59520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.731 [2024-11-20 07:25:21.951382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:21.731 [2024-11-20 07:25:21.951406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:59536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.731 [2024-11-20 07:25:21.951424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:21.731 [2024-11-20 07:25:21.951447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:59552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.731 [2024-11-20 07:25:21.951464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:21.731 [2024-11-20 07:25:21.951489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:59568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.731 [2024-11-20 07:25:21.951506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:21.731 [2024-11-20 07:25:21.951528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:59584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.731 [2024-11-20 07:25:21.951545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:21.731 [2024-11-20 07:25:21.951567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.731 [2024-11-20 07:25:21.951600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:21.731 [2024-11-20 07:25:21.951623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:59616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.731 [2024-11-20 07:25:21.951640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:21.731 [2024-11-20 07:25:21.951665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:59248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-11-20 07:25:21.951697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:21.731 [2024-11-20 07:25:21.951720] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:59280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-11-20 07:25:21.951737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:21.731 [2024-11-20 07:25:21.951759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:59312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-11-20 07:25:21.951776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:21.731 [2024-11-20 07:25:21.951798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:59344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-11-20 07:25:21.951815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:21.731 [2024-11-20 07:25:21.952503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:59368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-11-20 07:25:21.952529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:21.731 [2024-11-20 07:25:21.952557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:59624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.731 [2024-11-20 07:25:21.952580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:21.731 [2024-11-20 07:25:21.952615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:59640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.731 [2024-11-20 07:25:21.952632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:21.731 [2024-11-20 07:25:21.952659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:59656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.731 [2024-11-20 07:25:21.952691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:21.731 [2024-11-20 07:25:21.952717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:59160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.731 [2024-11-20 07:25:21.952734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:21.731 [2024-11-20 07:25:21.952774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:59392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.731 [2024-11-20 07:25:21.952791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:21.731 [2024-11-20 07:25:21.952814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:59424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.731 [2024-11-20 07:25:21.952837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001e p:0 m:0 dnr:0 
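(Each failed I/O above produces a pair of NOTICE lines from nvme_qpair.c: the first prints the command, the second its completion, with the status shown as (sct/sc). 03/02 is Status Code Type 3h, "Path Related Status", with Status Code 02h, "Asymmetric Access Inaccessible", so every completion in this burst is reporting the ANA state of the path the I/O was sent down rather than a media or transport failure. A minimal sketch of pulling that pair out of a raw 16-bit completion status, using the status-field layout from the NVMe base specification (phase in bit 0, SC in bits 8:1, SCT in bits 11:9, DNR in bit 15); the sample value is hypothetical and not taken from this log:

status=0x0604                      # hypothetical raw status word: dnr=0 m=0 sct=3 sc=2 p=0
sc=$(( (status >> 1) & 0xff ))     # Status Code       -> 0x02
sct=$(( (status >> 9) & 0x07 ))    # Status Code Type  -> 0x03
dnr=$(( (status >> 15) & 0x1 ))    # Do Not Retry      -> 0, so the initiator may retry the I/O elsewhere
printf 'sct=%02x sc=%02x dnr=%d\n' "$sct" "$sc" "$dnr"

With sct=03, sc=02 and dnr=0 this is exactly the "ASYMMETRIC ACCESS INACCESSIBLE (03/02) ... dnr:0" pattern repeated throughout this section.)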
00:23:21.731 [2024-11-20 07:25:21.952862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:59456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.731 [2024-11-20 07:25:21.952881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:21.731 [2024-11-20 07:25:21.952904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:59080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-11-20 07:25:21.952921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:21.732 [2024-11-20 07:25:21.952944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:58664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-11-20 07:25:21.952961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.732 [2024-11-20 07:25:21.952983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:58968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-11-20 07:25:21.953001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:21.732 [2024-11-20 07:25:21.953023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:59480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.732 [2024-11-20 07:25:21.953040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:21.732 [2024-11-20 07:25:21.953063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:59152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-11-20 07:25:21.953080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:21.732 [2024-11-20 07:25:21.953104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:59216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-11-20 07:25:21.953121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:21.732 [2024-11-20 07:25:21.954169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:59072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-11-20 07:25:21.954194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:21.732 [2024-11-20 07:25:21.954222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:58920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-11-20 07:25:21.954241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:21.732 [2024-11-20 07:25:21.954264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:59680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.732 [2024-11-20 07:25:21.954281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:21.732 [2024-11-20 07:25:21.954313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:59696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.732 [2024-11-20 07:25:21.954332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:21.732 [2024-11-20 07:25:21.954363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:59712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.732 [2024-11-20 07:25:21.954380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:21.732 [2024-11-20 07:25:21.954408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:59728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.732 [2024-11-20 07:25:21.954426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:21.732 [2024-11-20 07:25:21.954449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:59744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.732 [2024-11-20 07:25:21.954466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:21.732 [2024-11-20 07:25:21.954488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:59760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.732 [2024-11-20 07:25:21.954505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:21.732 [2024-11-20 07:25:21.954527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:59416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-11-20 07:25:21.954544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:21.732 [2024-11-20 07:25:21.954566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:59448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-11-20 07:25:21.954598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:21.732 [2024-11-20 07:25:21.954624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:59272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.732 [2024-11-20 07:25:21.954640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:21.732 [2024-11-20 07:25:21.954661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:59336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.732 [2024-11-20 07:25:21.954676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:21.732 [2024-11-20 07:25:21.954697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:58960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-11-20 07:25:21.954714] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:21.732 [2024-11-20 07:25:21.954735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:59376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.732 [2024-11-20 07:25:21.954750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:21.732 [2024-11-20 07:25:21.954771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:59144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.732 [2024-11-20 07:25:21.954803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:21.732 [2024-11-20 07:25:21.954826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-11-20 07:25:21.954858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:21.732 [2024-11-20 07:25:21.954881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:59040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.732 [2024-11-20 07:25:21.954898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:21.732 [2024-11-20 07:25:21.954926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:57520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-11-20 07:25:21.954944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:21.732 [2024-11-20 07:25:21.954967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:58544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-11-20 07:25:21.954984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:21.732 [2024-11-20 07:25:21.955006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:59088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-11-20 07:25:21.955023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:21.732 [2024-11-20 07:25:21.955046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:59016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-11-20 07:25:21.955063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:21.732 [2024-11-20 07:25:21.955085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:59536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.732 [2024-11-20 07:25:21.955102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:21.732 [2024-11-20 07:25:21.955124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:59568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:21.732 [2024-11-20 07:25:21.955156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:21.732 [2024-11-20 07:25:21.955179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:59600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.732 [2024-11-20 07:25:21.955196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:21.732 [2024-11-20 07:25:21.955218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:59248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-11-20 07:25:21.955234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:21.732 [2024-11-20 07:25:21.955271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:59312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.732 [2024-11-20 07:25:21.955288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:21.732 [2024-11-20 07:25:21.955334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:59472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-11-20 07:25:21.955353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:21.733 [2024-11-20 07:25:21.955375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:59504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-11-20 07:25:21.955392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.733 [2024-11-20 07:25:21.955415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:59256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-11-20 07:25:21.955432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:21.733 [2024-11-20 07:25:21.955454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:59320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-11-20 07:25:21.955477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:21.733 [2024-11-20 07:25:21.955501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:59624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.733 [2024-11-20 07:25:21.955518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:21.733 [2024-11-20 07:25:21.955541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:59656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.733 [2024-11-20 07:25:21.955558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:21.733 [2024-11-20 07:25:21.955580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 
lba:59392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.733 [2024-11-20 07:25:21.955608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:21.733 [2024-11-20 07:25:21.955646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:59456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.733 [2024-11-20 07:25:21.955663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:21.733 [2024-11-20 07:25:21.955700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:58664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-11-20 07:25:21.955717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:21.733 [2024-11-20 07:25:21.955739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:59480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.733 [2024-11-20 07:25:21.955755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:21.733 [2024-11-20 07:25:21.955776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:59216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-11-20 07:25:21.955792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:21.733 [2024-11-20 07:25:21.957882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:59176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-11-20 07:25:21.957905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:21.733 [2024-11-20 07:25:21.957947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:59768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.733 [2024-11-20 07:25:21.957966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:21.733 [2024-11-20 07:25:21.958004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:59784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.733 [2024-11-20 07:25:21.958022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:21.733 [2024-11-20 07:25:21.958045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:59800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.733 [2024-11-20 07:25:21.958063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:21.733 [2024-11-20 07:25:21.958085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:59816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.733 [2024-11-20 07:25:21.958107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:21.733 [2024-11-20 07:25:21.958131] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:59832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.733 [2024-11-20 07:25:21.958149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:21.733 [2024-11-20 07:25:21.958171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:59848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.733 [2024-11-20 07:25:21.958188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:21.733 [2024-11-20 07:25:21.958210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:59864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.733 [2024-11-20 07:25:21.958227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:21.733 [2024-11-20 07:25:21.958249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:59880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.733 [2024-11-20 07:25:21.958266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:21.733 [2024-11-20 07:25:21.958288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:59104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-11-20 07:25:21.958312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:21.733 [2024-11-20 07:25:21.958337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:58920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-11-20 07:25:21.958354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:21.733 [2024-11-20 07:25:21.958377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:59696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.733 [2024-11-20 07:25:21.958393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:21.733 [2024-11-20 07:25:21.958416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:59728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.733 [2024-11-20 07:25:21.958433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:21.733 [2024-11-20 07:25:21.958455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:59760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.733 [2024-11-20 07:25:21.958472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:21.733 [2024-11-20 07:25:21.958494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:59448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-11-20 07:25:21.958512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 
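(In the multipath-status test this burst is expected: verify I/O keeps running against a subsystem with two listeners while the test flips one listener's ANA state, so commands queued on the affected path complete with 03/02 and, because dnr is 0, the host's multipath layer resubmits them on the surviving path until the state is flipped back. A hedged sketch of the kind of RPC that opens such a window; the NQN (cnode1) is the subsystem deleted at the end of this run, but the address, port and exact flag spellings below are assumptions from memory, not a quote of multipath_status.sh, and may differ between SPDK versions:

# assumed address/port; fail one path, I/O on it then completes with (03/02)
./scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
# ...multipath status is checked against the surviving path, then the path is restored:
./scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4421 -n optimized

Once the path returns to an accessible ANA state the NOTICE stream stops and the run proceeds to the throughput summary and teardown below.)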
00:23:21.733 [2024-11-20 07:25:21.958535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:59336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.733 [2024-11-20 07:25:21.958553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:21.733 [2024-11-20 07:25:21.958575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:59376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.733 [2024-11-20 07:25:21.958607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:21.733 [2024-11-20 07:25:21.958635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:58560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-11-20 07:25:21.958668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:21.733 [2024-11-20 07:25:21.958694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:57520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-11-20 07:25:21.958711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:21.733 [2024-11-20 07:25:21.958733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:59088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.733 [2024-11-20 07:25:21.958750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:21.733 [2024-11-20 07:25:21.958773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:59536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.733 [2024-11-20 07:25:21.958790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:21.734 [2024-11-20 07:25:21.958813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:59600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.734 [2024-11-20 07:25:21.958829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:21.734 [2024-11-20 07:25:21.958852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:59312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-11-20 07:25:21.958868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.734 [2024-11-20 07:25:21.958891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:59504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-11-20 07:25:21.958907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:21.734 [2024-11-20 07:25:21.958930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:59320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-11-20 07:25:21.958963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:52 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:21.734 [2024-11-20 07:25:21.959543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:59656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.734 [2024-11-20 07:25:21.959569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:21.734 [2024-11-20 07:25:21.959596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:59456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.734 [2024-11-20 07:25:21.959621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:21.734 [2024-11-20 07:25:21.959644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:59480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.734 [2024-11-20 07:25:21.959662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:21.734 [2024-11-20 07:25:21.959700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:59512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-11-20 07:25:21.959717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:21.734 [2024-11-20 07:25:21.959760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:59544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-11-20 07:25:21.959777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:21.734 [2024-11-20 07:25:21.959798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:59576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-11-20 07:25:21.959813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:21.734 [2024-11-20 07:25:21.959835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-11-20 07:25:21.959852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:21.734 [2024-11-20 07:25:21.959873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:59648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-11-20 07:25:21.959905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:21.734 [2024-11-20 07:25:21.959929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:59408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-11-20 07:25:21.959947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:21.734 [2024-11-20 07:25:21.960832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:59496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-11-20 07:25:21.960870] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:21.734 [2024-11-20 07:25:21.960897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:59904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.734 [2024-11-20 07:25:21.960930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:21.734 [2024-11-20 07:25:21.960955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:59920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.734 [2024-11-20 07:25:21.960973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:21.734 [2024-11-20 07:25:21.960996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:59936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.734 [2024-11-20 07:25:21.961012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:21.734 [2024-11-20 07:25:21.961035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:59952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.734 [2024-11-20 07:25:21.961052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:21.734 [2024-11-20 07:25:21.961074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:59968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.734 [2024-11-20 07:25:21.961091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:21.734 [2024-11-20 07:25:21.961113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:59984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.734 [2024-11-20 07:25:21.961130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:21.734 [2024-11-20 07:25:21.961154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:60000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.734 [2024-11-20 07:25:21.961176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:21.734 [2024-11-20 07:25:21.961199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:60016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.734 [2024-11-20 07:25:21.961216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:21.734 [2024-11-20 07:25:21.961239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:60032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.734 [2024-11-20 07:25:21.961256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:21.734 [2024-11-20 07:25:21.961278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:59672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:21.734 [2024-11-20 07:25:21.961295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:21.734 [2024-11-20 07:25:21.961327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:59704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.734 [2024-11-20 07:25:21.961356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:21.734 [2024-11-20 07:25:21.961379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:59736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-11-20 07:25:21.961396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:21.735 [2024-11-20 07:25:21.961419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:59240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-11-20 07:25:21.961436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:21.735 [2024-11-20 07:25:21.961459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:59768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.735 [2024-11-20 07:25:21.961476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:21.735 [2024-11-20 07:25:21.961499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:59800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.735 [2024-11-20 07:25:21.961517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:21.735 [2024-11-20 07:25:21.961540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:59832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.735 [2024-11-20 07:25:21.961557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:21.735 [2024-11-20 07:25:21.961579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:59864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.735 [2024-11-20 07:25:21.961600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:21.735 [2024-11-20 07:25:21.961622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:59104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-11-20 07:25:21.961638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:21.735 [2024-11-20 07:25:21.961660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:59696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.735 [2024-11-20 07:25:21.961681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.735 [2024-11-20 07:25:21.961705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 
lba:59760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.735 [2024-11-20 07:25:21.961722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.735 [2024-11-20 07:25:21.962821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:59336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.735 [2024-11-20 07:25:21.962845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:21.735 [2024-11-20 07:25:21.962873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:58560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-11-20 07:25:21.962892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:21.735 [2024-11-20 07:25:21.962916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:59088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-11-20 07:25:21.962933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:21.735 [2024-11-20 07:25:21.962956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:59600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.735 [2024-11-20 07:25:21.962973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:21.735 [2024-11-20 07:25:21.962995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:59504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-11-20 07:25:21.963012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:21.735 [2024-11-20 07:25:21.963049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:59208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-11-20 07:25:21.963066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:21.735 [2024-11-20 07:25:21.963088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:59552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-11-20 07:25:21.963121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:21.735 [2024-11-20 07:25:21.963144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:59616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-11-20 07:25:21.963161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:21.735 [2024-11-20 07:25:21.963183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:59456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.735 [2024-11-20 07:25:21.963199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:21.735 [2024-11-20 07:25:21.963222] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:59512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-11-20 07:25:21.963239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:21.735 [2024-11-20 07:25:21.963261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-11-20 07:25:21.963282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:21.735 [2024-11-20 07:25:21.963314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:59648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-11-20 07:25:21.963334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:21.735 [2024-11-20 07:25:21.963838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:60048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.735 [2024-11-20 07:25:21.963863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:21.735 [2024-11-20 07:25:21.963926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:60064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.735 [2024-11-20 07:25:21.963948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:21.735 [2024-11-20 07:25:21.963987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:60080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.735 [2024-11-20 07:25:21.964004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:21.735 [2024-11-20 07:25:21.964025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.735 [2024-11-20 07:25:21.964041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:21.735 [2024-11-20 07:25:21.964062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.735 [2024-11-20 07:25:21.964078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:21.735 [2024-11-20 07:25:21.964100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.735 [2024-11-20 07:25:21.964116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:21.735 [2024-11-20 07:25:21.964137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.735 [2024-11-20 07:25:21.964167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 
00:23:21.735 [2024-11-20 07:25:21.964190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.735 [2024-11-20 07:25:21.964207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:21.735 [2024-11-20 07:25:21.964245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:59424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.735 [2024-11-20 07:25:21.964261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:21.735 [2024-11-20 07:25:21.964284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:59904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.735 [2024-11-20 07:25:21.964300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:21.735 [2024-11-20 07:25:21.964336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:59936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.735 [2024-11-20 07:25:21.964355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:21.735 [2024-11-20 07:25:21.964383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:59968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.736 [2024-11-20 07:25:21.964402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:21.736 [2024-11-20 07:25:21.964425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:60000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.736 [2024-11-20 07:25:21.964443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:21.736 [2024-11-20 07:25:21.964466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:60032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.736 [2024-11-20 07:25:21.964482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:21.736 [2024-11-20 07:25:21.964505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:59704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-11-20 07:25:21.964522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:21.736 [2024-11-20 07:25:21.964560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:59240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-11-20 07:25:21.964577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:21.736 [2024-11-20 07:25:21.964613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:59800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.736 [2024-11-20 07:25:21.964630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:63 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:21.736 [2024-11-20 07:25:21.964651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:59864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.736 [2024-11-20 07:25:21.964667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:21.736 [2024-11-20 07:25:21.964689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:59696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.736 [2024-11-20 07:25:21.964705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:21.736 [2024-11-20 07:25:21.965968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.736 [2024-11-20 07:25:21.965993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.736 [2024-11-20 07:25:21.966019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:60184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.736 [2024-11-20 07:25:21.966037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:21.736 [2024-11-20 07:25:21.966058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.736 [2024-11-20 07:25:21.966075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:21.736 [2024-11-20 07:25:21.966097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.736 [2024-11-20 07:25:21.966113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:21.736 [2024-11-20 07:25:21.966140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:59776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-11-20 07:25:21.966157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:21.736 [2024-11-20 07:25:21.966178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:59808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-11-20 07:25:21.966194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:21.736 [2024-11-20 07:25:21.966232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:59840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-11-20 07:25:21.966248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:21.736 [2024-11-20 07:25:21.966286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:59872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-11-20 07:25:21.966317] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:21.736 [2024-11-20 07:25:21.966344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:59680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-11-20 07:25:21.966361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:21.736 [2024-11-20 07:25:21.966385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:59744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-11-20 07:25:21.966402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:21.736 [2024-11-20 07:25:21.966424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:59144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.736 [2024-11-20 07:25:21.966442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:21.736 7995.75 IOPS, 31.23 MiB/s [2024-11-20T06:25:25.169Z] 8012.12 IOPS, 31.30 MiB/s [2024-11-20T06:25:25.169Z] 8023.59 IOPS, 31.34 MiB/s [2024-11-20T06:25:25.169Z] Received shutdown signal, test time was about 34.382716 seconds 00:23:21.736 00:23:21.736 Latency(us) 00:23:21.736 [2024-11-20T06:25:25.169Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:21.736 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:21.736 Verification LBA range: start 0x0 length 0x4000 00:23:21.736 Nvme0n1 : 34.38 8024.74 31.35 0.00 0.00 15924.57 1844.72 4026531.84 00:23:21.736 [2024-11-20T06:25:25.169Z] =================================================================================================================== 00:23:21.736 [2024-11-20T06:25:25.169Z] Total : 8024.74 31.35 0.00 0.00 15924.57 1844.72 4026531.84 00:23:21.736 07:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:21.994 07:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:23:21.994 07:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:21.994 07:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:23:21.994 07:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:21.994 07:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:23:21.994 07:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:21.994 07:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:23:21.994 07:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:21.994 07:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:21.994 rmmod nvme_tcp 00:23:21.994 rmmod nvme_fabrics 00:23:21.994 rmmod nvme_keyring 00:23:21.994 07:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:21.994 07:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:23:21.994 07:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:23:21.994 07:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2580345 ']' 00:23:21.994 07:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2580345 00:23:21.994 07:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 2580345 ']' 00:23:21.994 07:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 2580345 00:23:21.994 07:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:23:21.994 07:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:21.994 07:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2580345 00:23:21.994 07:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:21.994 07:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:21.994 07:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2580345' 00:23:21.994 killing process with pid 2580345 00:23:21.994 07:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 2580345 00:23:21.994 07:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 2580345 00:23:22.252 07:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:22.252 07:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:22.252 07:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:22.252 07:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:23:22.252 07:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:23:22.252 07:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:22.252 07:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:23:22.252 07:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:22.252 07:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:22.252 07:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.252 07:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:22.252 07:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.158 07:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:24.158 00:23:24.158 real 0m43.505s 00:23:24.158 user 2m12.016s 00:23:24.158 sys 0m11.000s 00:23:24.158 07:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- 
# xtrace_disable 00:23:24.158 07:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:24.158 ************************************ 00:23:24.158 END TEST nvmf_host_multipath_status 00:23:24.158 ************************************ 00:23:24.417 07:25:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:24.417 07:25:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:24.417 07:25:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:24.417 07:25:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.417 ************************************ 00:23:24.417 START TEST nvmf_discovery_remove_ifc 00:23:24.417 ************************************ 00:23:24.417 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:24.417 * Looking for test storage... 00:23:24.417 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:24.417 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:24.417 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:23:24.417 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:24.417 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:24.417 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:24.417 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:24.417 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:24.417 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:23:24.417 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:23:24.417 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:23:24.417 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:23:24.417 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:23:24.417 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:23:24.417 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:23:24.417 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:24.417 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:23:24.417 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:23:24.417 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:24.417 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:24.417 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:23:24.417 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:23:24.417 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:24.417 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:23:24.417 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:23:24.417 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:23:24.417 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:23:24.417 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:24.417 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:23:24.417 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:23:24.417 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:24.417 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:24.417 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:23:24.417 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:24.417 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:24.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.417 --rc genhtml_branch_coverage=1 00:23:24.417 --rc genhtml_function_coverage=1 00:23:24.417 --rc genhtml_legend=1 00:23:24.417 --rc geninfo_all_blocks=1 00:23:24.417 --rc geninfo_unexecuted_blocks=1 00:23:24.417 00:23:24.417 ' 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:24.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.418 --rc genhtml_branch_coverage=1 00:23:24.418 --rc genhtml_function_coverage=1 00:23:24.418 --rc genhtml_legend=1 00:23:24.418 --rc geninfo_all_blocks=1 00:23:24.418 --rc geninfo_unexecuted_blocks=1 00:23:24.418 00:23:24.418 ' 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:24.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.418 --rc genhtml_branch_coverage=1 00:23:24.418 --rc genhtml_function_coverage=1 00:23:24.418 --rc genhtml_legend=1 00:23:24.418 --rc geninfo_all_blocks=1 00:23:24.418 --rc geninfo_unexecuted_blocks=1 00:23:24.418 00:23:24.418 ' 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:24.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.418 --rc genhtml_branch_coverage=1 00:23:24.418 --rc genhtml_function_coverage=1 00:23:24.418 --rc genhtml_legend=1 00:23:24.418 --rc geninfo_all_blocks=1 00:23:24.418 --rc geninfo_unexecuted_blocks=1 00:23:24.418 00:23:24.418 ' 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:24.418 
07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:24.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:23:24.418 07:25:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:26.950 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:26.950 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:23:26.950 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:26.950 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:26.950 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:26.950 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:26.950 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:26.950 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:23:26.950 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:26.950 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:23:26.950 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:23:26.950 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:23:26.950 07:25:29 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:23:26.950 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:23:26.950 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:23:26.950 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:26.950 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:26.950 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:26.951 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:26.951 07:25:29 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:26.951 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:26.951 Found net devices under 0000:09:00.0: cvl_0_0 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:26.951 Found net devices under 0000:09:00.1: cvl_0_1 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:26.951 
07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:26.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:26.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:23:26.951 00:23:26.951 --- 10.0.0.2 ping statistics --- 00:23:26.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.951 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:26.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:26.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:23:26.951 00:23:26.951 --- 10.0.0.1 ping statistics --- 00:23:26.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.951 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:26.951 07:25:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:26.951 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:26.951 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:26.951 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:26.951 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:26.951 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2586982 00:23:26.951 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:26.952 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 2586982 00:23:26.952 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 2586982 ']' 00:23:26.952 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.952 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:26.952 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:26.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:26.952 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:26.952 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:26.952 [2024-11-20 07:25:30.070083] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:23:26.952 [2024-11-20 07:25:30.070175] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:26.952 [2024-11-20 07:25:30.139902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.952 [2024-11-20 07:25:30.195942] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:26.952 [2024-11-20 07:25:30.195998] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:26.952 [2024-11-20 07:25:30.196025] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:26.952 [2024-11-20 07:25:30.196036] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:26.952 [2024-11-20 07:25:30.196046] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:26.952 [2024-11-20 07:25:30.196700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.952 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:26.952 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:23:26.952 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:26.952 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:26.952 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:26.952 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:26.952 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:26.952 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.952 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:26.952 [2024-11-20 07:25:30.355036] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:26.952 [2024-11-20 07:25:30.363231] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:26.952 null0 00:23:27.210 [2024-11-20 07:25:30.395182] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:27.210 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.210 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2587013 00:23:27.210 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:23:27.210 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2587013 /tmp/host.sock 00:23:27.210 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 2587013 ']' 00:23:27.210 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:23:27.210 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:27.210 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:27.210 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:27.210 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:27.210 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:27.210 [2024-11-20 07:25:30.461169] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:23:27.210 [2024-11-20 07:25:30.461247] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2587013 ] 00:23:27.210 [2024-11-20 07:25:30.526104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.210 [2024-11-20 07:25:30.588499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.468 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:27.468 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:23:27.468 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:27.468 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:27.468 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.468 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:27.468 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.468 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:27.468 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.468 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:27.468 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.468 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:27.468 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.468 07:25:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:28.841 [2024-11-20 07:25:31.914417] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:28.841 [2024-11-20 07:25:31.914448] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:28.841 [2024-11-20 07:25:31.914478] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:28.841 [2024-11-20 07:25:32.000764] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:28.841 [2024-11-20 07:25:32.101672] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:23:28.841 [2024-11-20 07:25:32.102690] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x12d3c00:1 started. 00:23:28.841 [2024-11-20 07:25:32.104424] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:28.841 [2024-11-20 07:25:32.104486] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:28.841 [2024-11-20 07:25:32.104526] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:28.841 [2024-11-20 07:25:32.104548] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:28.841 [2024-11-20 07:25:32.104579] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:28.841 07:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.841 07:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:28.841 07:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:28.841 07:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.841 07:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:28.841 07:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.841 07:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:28.841 07:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:28.841 07:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:28.841 [2024-11-20 07:25:32.111224] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x12d3c00 was disconnected and freed. delete nvme_qpair. 
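The host-side attach flow traced above boils down to four steps. The following is a minimal sketch (not the test script itself), assuming the /tmp/host.sock RPC socket and the 10.0.0.2:8009 discovery endpoint used in this run; rpc_cmd is the autotest helper that forwards these calls to scripts/rpc.py, and waitforlisten is the harness helper that waits for the RPC socket to come up:

# Start the host-side SPDK app; --wait-for-rpc holds initialization until framework_start_init.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
hostpid=$!
waitforlisten "$hostpid" /tmp/host.sock

# Apply bdev_nvme options before subsystem init (flags exactly as recorded in the trace), then finish startup.
rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1
rpc_cmd -s /tmp/host.sock framework_start_init

# Attach through the discovery service; the three timeout knobs drive the reconnect behaviour
# seen later in the log, and --wait-for-attach blocks until the nvme0n1 bdev has been created.
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
    --wait-for-attach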
00:23:28.841 07:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.841 07:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:28.841 07:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:23:28.841 07:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:23:28.841 07:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:28.841 07:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:28.841 07:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.841 07:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:28.841 07:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.841 07:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:28.841 07:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:28.841 07:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:28.841 07:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.841 07:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:28.841 07:25:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:30.213 07:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:30.213 07:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:30.213 07:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:30.213 07:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.213 07:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:30.213 07:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:30.213 07:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:30.213 07:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.213 07:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:30.213 07:25:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:31.147 07:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:31.147 07:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:31.147 07:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:31.147 07:25:34 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.147 07:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:31.147 07:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:31.147 07:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:31.147 07:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.147 07:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:31.147 07:25:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:32.080 07:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:32.080 07:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:32.080 07:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:32.080 07:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.080 07:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:32.080 07:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:32.080 07:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:32.080 07:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.080 07:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:32.080 07:25:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:33.013 07:25:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:33.013 07:25:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:33.013 07:25:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:33.013 07:25:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.014 07:25:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:33.014 07:25:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:33.014 07:25:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:33.014 07:25:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.014 07:25:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:33.014 07:25:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:34.387 07:25:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:34.387 07:25:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:34.387 07:25:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:34.387 07:25:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.387 07:25:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:34.387 07:25:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:34.387 07:25:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:34.387 07:25:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.387 07:25:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:34.387 07:25:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:34.387 [2024-11-20 07:25:37.545850] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:34.387 [2024-11-20 07:25:37.545929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.387 [2024-11-20 07:25:37.545951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.387 [2024-11-20 07:25:37.545968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.387 [2024-11-20 07:25:37.545980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.387 [2024-11-20 07:25:37.545993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.387 [2024-11-20 07:25:37.546005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.387 [2024-11-20 07:25:37.546018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.387 [2024-11-20 07:25:37.546030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.387 [2024-11-20 07:25:37.546043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.387 [2024-11-20 07:25:37.546055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.387 [2024-11-20 07:25:37.546068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b0400 is same with the state(6) to be set 00:23:34.387 [2024-11-20 07:25:37.555870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b0400 (9): Bad file descriptor 00:23:34.387 [2024-11-20 07:25:37.565911] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:34.387 [2024-11-20 07:25:37.565932] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
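The repeated get_bdev_list / sleep cycle above is the test's wait loop: it asks the SPDK host app over its RPC socket for the current bdev names and retries once per second until the list matches the expected value (here, the empty string, i.e. until nvme0n1 disappears). A minimal sketch of that pattern, built from the same pipeline seen in the trace (rpc_cmd is the harness's JSON-RPC wrapper; the helper bodies below are illustrative, not the exact discovery_remove_ifc.sh source):

  # Poll the host app's bdev list over /tmp/host.sock until it matches
  # the expected string (empty string == "wait until the bdev is gone").
  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
      local expected=$1
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1
      done
  }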
00:23:34.387 [2024-11-20 07:25:37.565941] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:34.387 [2024-11-20 07:25:37.565949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:34.387 [2024-11-20 07:25:37.566002] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:35.322 07:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:35.322 07:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:35.322 07:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:35.322 07:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.322 07:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:35.322 07:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:35.322 07:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:35.322 [2024-11-20 07:25:38.582363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:35.322 [2024-11-20 07:25:38.582441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b0400 with addr=10.0.0.2, port=4420 00:23:35.322 [2024-11-20 07:25:38.582469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b0400 is same with the state(6) to be set 00:23:35.322 [2024-11-20 07:25:38.582516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b0400 (9): Bad file descriptor 00:23:35.322 [2024-11-20 07:25:38.582994] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:23:35.322 [2024-11-20 07:25:38.583040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:35.322 [2024-11-20 07:25:38.583057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:35.322 [2024-11-20 07:25:38.583073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:35.322 [2024-11-20 07:25:38.583086] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:35.322 [2024-11-20 07:25:38.583096] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:35.322 [2024-11-20 07:25:38.583104] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:35.322 [2024-11-20 07:25:38.583117] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
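The errno 110 (connection timed out) and "Bad file descriptor" errors above, followed by the disconnect/reset/reconnect cycle, are the expected fallout of the address removal at the start of this test step: the target's address inside the cvl_0_0_ns_spdk namespace was taken away, so the host-side bdev_nvme layer keeps timing out and retrying until the path comes back. Condensed sketch of the trigger and the later recovery, using the exact commands that appear in this trace:

  # Break the path: drop the target address and down the link inside
  # the target's network namespace (host I/O then times out, errno 110).
  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down

  # Restore the path later; the discovery poller re-attaches the
  # subsystem and the namespace reappears as a new bdev (nvme1n1).
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up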
00:23:35.322 [2024-11-20 07:25:38.583126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:35.322 07:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.322 07:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:35.322 07:25:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:36.294 [2024-11-20 07:25:39.585615] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:36.294 [2024-11-20 07:25:39.585661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:36.295 [2024-11-20 07:25:39.585689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:36.295 [2024-11-20 07:25:39.585716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:36.295 [2024-11-20 07:25:39.585728] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:23:36.295 [2024-11-20 07:25:39.585740] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:36.295 [2024-11-20 07:25:39.585748] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:36.295 [2024-11-20 07:25:39.585755] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:36.295 [2024-11-20 07:25:39.585790] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:23:36.295 [2024-11-20 07:25:39.585846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.295 [2024-11-20 07:25:39.585869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.295 [2024-11-20 07:25:39.585889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.295 [2024-11-20 07:25:39.585901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.295 [2024-11-20 07:25:39.585914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.295 [2024-11-20 07:25:39.585926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.295 [2024-11-20 07:25:39.585939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.295 [2024-11-20 07:25:39.585951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.295 [2024-11-20 07:25:39.585964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.295 [2024-11-20 07:25:39.585977] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.295 [2024-11-20 07:25:39.585989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:23:36.295 [2024-11-20 07:25:39.586035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129fb40 (9): Bad file descriptor 00:23:36.295 [2024-11-20 07:25:39.587034] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:36.295 [2024-11-20 07:25:39.587057] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:23:36.295 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:36.295 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:36.295 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:36.295 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.295 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:36.295 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:36.295 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:36.295 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.295 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:36.295 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:36.295 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:36.295 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:36.295 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:36.295 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:36.295 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:36.295 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.295 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:36.295 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:36.295 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:36.295 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.605 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:36.605 07:25:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:37.562 07:25:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:37.562 07:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:37.562 07:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:37.562 07:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.562 07:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:37.562 07:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:37.562 07:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:37.562 07:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.562 07:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:37.562 07:25:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:38.496 [2024-11-20 07:25:41.638953] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:38.496 [2024-11-20 07:25:41.638979] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:38.496 [2024-11-20 07:25:41.639017] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:38.496 [2024-11-20 07:25:41.767431] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:23:38.496 07:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:38.496 07:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:38.496 07:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:38.496 07:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.496 07:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:38.496 07:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:38.496 07:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:38.496 07:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.496 07:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:38.496 07:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:38.496 [2024-11-20 07:25:41.868313] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:23:38.496 [2024-11-20 07:25:41.869178] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x12baa40:1 started. 
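The "Discovery[10.0.0.2:8009]" messages above come from the SPDK host app's built-in discovery service: once the interface is back up it re-reads the discovery log page, finds nqn.2016-06.io.spdk:cnode0 again and attaches it under the nvme1 prefix. That service is presumably started earlier in the test via the bdev_nvme_start_discovery RPC; a sketch of roughly that shape, with the addresses taken from this log rather than copied from the test script:

  # Illustrative only: start the host-side discovery poller against the
  # target's discovery service (port 8009) and auto-attach anything it
  # reports; attached controllers are named with the "nvme" prefix.
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 --wait-for-attach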
00:23:38.496 [2024-11-20 07:25:41.870589] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:38.496 [2024-11-20 07:25:41.870649] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:38.496 [2024-11-20 07:25:41.870684] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:38.496 [2024-11-20 07:25:41.870705] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:23:38.496 [2024-11-20 07:25:41.870717] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:38.496 [2024-11-20 07:25:41.876835] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x12baa40 was disconnected and freed. delete nvme_qpair. 00:23:39.429 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:39.429 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:39.429 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.429 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:39.429 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:39.429 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:39.429 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:39.429 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.687 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:39.687 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:39.687 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2587013 00:23:39.687 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 2587013 ']' 00:23:39.687 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 2587013 00:23:39.687 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:23:39.687 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:39.687 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2587013 00:23:39.687 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:39.687 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:39.687 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2587013' 00:23:39.687 killing process with pid 2587013 00:23:39.687 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 2587013 00:23:39.687 07:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 2587013 00:23:39.687 07:25:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:39.687 07:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:39.687 07:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:23:39.687 07:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:39.687 07:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:23:39.687 07:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:39.687 07:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:39.945 rmmod nvme_tcp 00:23:39.945 rmmod nvme_fabrics 00:23:39.945 rmmod nvme_keyring 00:23:39.945 07:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:39.945 07:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:23:39.946 07:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:23:39.946 07:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2586982 ']' 00:23:39.946 07:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2586982 00:23:39.946 07:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 2586982 ']' 00:23:39.946 07:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 2586982 00:23:39.946 07:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:23:39.946 07:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:39.946 07:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2586982 00:23:39.946 07:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:39.946 07:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:39.946 07:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2586982' 00:23:39.946 killing process with pid 2586982 00:23:39.946 07:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 2586982 00:23:39.946 07:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 2586982 00:23:40.205 07:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:40.205 07:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:40.205 07:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:40.205 07:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:23:40.205 07:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:23:40.205 07:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:40.205 07:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:23:40.205 07:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # 
[[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:40.205 07:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:40.205 07:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.205 07:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:40.205 07:25:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.114 07:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:42.114 00:23:42.114 real 0m17.880s 00:23:42.114 user 0m25.952s 00:23:42.114 sys 0m3.090s 00:23:42.114 07:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:42.114 07:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:42.114 ************************************ 00:23:42.114 END TEST nvmf_discovery_remove_ifc 00:23:42.114 ************************************ 00:23:42.114 07:25:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:42.114 07:25:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:42.114 07:25:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:42.114 07:25:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.373 ************************************ 00:23:42.373 START TEST nvmf_identify_kernel_target 00:23:42.373 ************************************ 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:42.373 * Looking for test storage... 
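The block above is the standard nvmftestfini teardown for these TCP tests: stop the SPDK processes, unload the host NVMe/TCP kernel modules, strip the SPDK-tagged iptables rules and flush the test addresses before the next test starts. Condensed into plain commands (the PID is captured by the harness; shown here as a placeholder variable):

  kill "$app_pid" && wait "$app_pid"            # killprocess <pid>, once per SPDK app
  modprobe -v -r nvme-tcp                       # also pulls out nvme_fabrics/nvme_keyring
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip -4 addr flush cvl_0_1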
00:23:42.373 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:42.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.373 --rc genhtml_branch_coverage=1 00:23:42.373 --rc genhtml_function_coverage=1 00:23:42.373 --rc genhtml_legend=1 00:23:42.373 --rc geninfo_all_blocks=1 00:23:42.373 --rc geninfo_unexecuted_blocks=1 00:23:42.373 00:23:42.373 ' 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:42.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.373 --rc genhtml_branch_coverage=1 00:23:42.373 --rc genhtml_function_coverage=1 00:23:42.373 --rc genhtml_legend=1 00:23:42.373 --rc geninfo_all_blocks=1 00:23:42.373 --rc geninfo_unexecuted_blocks=1 00:23:42.373 00:23:42.373 ' 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:42.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.373 --rc genhtml_branch_coverage=1 00:23:42.373 --rc genhtml_function_coverage=1 00:23:42.373 --rc genhtml_legend=1 00:23:42.373 --rc geninfo_all_blocks=1 00:23:42.373 --rc geninfo_unexecuted_blocks=1 00:23:42.373 00:23:42.373 ' 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:42.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.373 --rc genhtml_branch_coverage=1 00:23:42.373 --rc genhtml_function_coverage=1 00:23:42.373 --rc genhtml_legend=1 00:23:42.373 --rc geninfo_all_blocks=1 00:23:42.373 --rc geninfo_unexecuted_blocks=1 00:23:42.373 00:23:42.373 ' 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:42.373 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:42.374 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:42.374 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:42.374 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:42.374 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:23:42.374 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:42.374 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:42.374 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:42.374 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.374 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.374 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.374 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:23:42.374 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.374 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:23:42.374 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:42.374 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:42.374 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:42.374 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:42.374 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:42.374 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:23:42.374 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:42.374 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:42.374 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:42.374 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:42.374 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:42.374 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:42.374 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:42.374 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:42.374 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:42.374 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:42.374 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.374 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:42.374 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.374 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:42.374 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:42.374 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:23:42.374 07:25:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.903 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:44.903 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:23:44.903 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:44.903 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:44.903 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:44.903 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:44.903 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:44.903 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:23:44.903 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:44.903 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:23:44.903 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:23:44.903 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:23:44.903 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:23:44.903 07:25:47 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:23:44.903 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:23:44.903 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:44.903 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:44.903 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:44.903 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:44.903 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:44.903 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:44.903 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:44.903 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:44.903 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:44.903 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:44.903 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:44.903 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:44.903 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:44.903 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:44.903 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:44.903 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:44.904 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:44.904 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:44.904 Found net devices under 0000:09:00.0: cvl_0_0 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:44.904 Found net devices under 0000:09:00.1: cvl_0_1 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:44.904 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:44.904 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:23:44.904 00:23:44.904 --- 10.0.0.2 ping statistics --- 00:23:44.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.904 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:44.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:44.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:23:44.904 00:23:44.904 --- 10.0.0.1 ping statistics --- 00:23:44.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.904 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:44.904 07:25:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:44.904 07:25:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:44.904 07:25:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:44.904 07:25:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:23:44.904 07:25:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:44.904 07:25:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:44.904 07:25:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.904 07:25:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.904 07:25:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:44.904 07:25:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.904 07:25:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:44.904 07:25:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:44.904 07:25:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:44.904 07:25:48 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:44.904 07:25:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:44.904 07:25:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:44.904 07:25:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:23:44.904 07:25:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:44.904 07:25:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:44.904 07:25:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:44.905 07:25:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:23:44.905 07:25:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:23:44.905 07:25:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:23:44.905 07:25:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:44.905 07:25:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:45.839 Waiting for block devices as requested 00:23:46.097 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:46.097 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:46.097 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:46.356 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:46.356 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:46.356 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:46.356 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:46.614 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:46.614 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:23:46.614 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:46.873 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:46.873 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:46.873 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:46.873 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:47.132 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:47.132 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:47.132 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:47.392 07:25:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:47.392 07:25:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:47.392 07:25:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:23:47.392 07:25:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:23:47.392 07:25:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:47.392 07:25:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
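configure_kernel_target here builds a Linux-kernel NVMe-oF target through configfs: it creates the nqn.2016-06.io.spdk:testnqn subsystem, backs namespace 1 with the local NVMe block device, and exposes it over TCP at 10.0.0.1:4420; the mkdir/echo/ln steps that follow in this trace implement exactly that. A condensed sketch using the standard kernel nvmet attribute files (values as used in this run; the attribute file names are the usual nvmet ones, not copied verbatim from common.sh):

  modprobe nvmet
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir "$subsys" "$subsys/namespaces/1" "$port"
  echo 1            > "$subsys/attr_allow_any_host"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"

The nvme discover output further down confirms the result: both the discovery subsystem and nqn.2016-06.io.spdk:testnqn are reported on 10.0.0.1:4420.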
00:23:47.392 07:25:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:23:47.392 07:25:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:23:47.392 07:25:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:47.392 No valid GPT data, bailing 00:23:47.392 07:25:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:47.392 07:25:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:23:47.392 07:25:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:23:47.392 07:25:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:23:47.392 07:25:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:23:47.392 07:25:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:47.392 07:25:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:47.392 07:25:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:47.392 07:25:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:47.392 07:25:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:23:47.392 07:25:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:23:47.392 07:25:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:23:47.392 07:25:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:23:47.392 07:25:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:23:47.392 07:25:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:23:47.392 07:25:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:23:47.392 07:25:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:47.392 07:25:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:23:47.653 00:23:47.653 Discovery Log Number of Records 2, Generation counter 2 00:23:47.653 =====Discovery Log Entry 0====== 00:23:47.653 trtype: tcp 00:23:47.653 adrfam: ipv4 00:23:47.653 subtype: current discovery subsystem 00:23:47.653 treq: not specified, sq flow control disable supported 00:23:47.653 portid: 1 00:23:47.653 trsvcid: 4420 00:23:47.653 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:47.653 traddr: 10.0.0.1 00:23:47.653 eflags: none 00:23:47.653 sectype: none 00:23:47.653 =====Discovery Log Entry 1====== 00:23:47.653 trtype: tcp 00:23:47.653 adrfam: ipv4 00:23:47.653 subtype: nvme subsystem 00:23:47.653 treq: not specified, sq flow control disable 
supported 00:23:47.653 portid: 1 00:23:47.653 trsvcid: 4420 00:23:47.653 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:47.653 traddr: 10.0.0.1 00:23:47.653 eflags: none 00:23:47.653 sectype: none 00:23:47.653 07:25:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:47.653 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:47.653 ===================================================== 00:23:47.653 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:47.653 ===================================================== 00:23:47.653 Controller Capabilities/Features 00:23:47.653 ================================ 00:23:47.653 Vendor ID: 0000 00:23:47.653 Subsystem Vendor ID: 0000 00:23:47.653 Serial Number: bb2ed33fdfa219fd499e 00:23:47.653 Model Number: Linux 00:23:47.653 Firmware Version: 6.8.9-20 00:23:47.653 Recommended Arb Burst: 0 00:23:47.653 IEEE OUI Identifier: 00 00 00 00:23:47.653 Multi-path I/O 00:23:47.653 May have multiple subsystem ports: No 00:23:47.653 May have multiple controllers: No 00:23:47.653 Associated with SR-IOV VF: No 00:23:47.653 Max Data Transfer Size: Unlimited 00:23:47.653 Max Number of Namespaces: 0 00:23:47.653 Max Number of I/O Queues: 1024 00:23:47.653 NVMe Specification Version (VS): 1.3 00:23:47.653 NVMe Specification Version (Identify): 1.3 00:23:47.653 Maximum Queue Entries: 1024 00:23:47.653 Contiguous Queues Required: No 00:23:47.653 Arbitration Mechanisms Supported 00:23:47.653 Weighted Round Robin: Not Supported 00:23:47.653 Vendor Specific: Not Supported 00:23:47.653 Reset Timeout: 7500 ms 00:23:47.653 Doorbell Stride: 4 bytes 00:23:47.653 NVM Subsystem Reset: Not Supported 00:23:47.653 Command Sets Supported 00:23:47.653 NVM Command Set: Supported 00:23:47.653 Boot Partition: Not Supported 00:23:47.653 Memory Page Size Minimum: 4096 bytes 00:23:47.653 Memory Page Size Maximum: 4096 bytes 00:23:47.653 Persistent Memory Region: Not Supported 00:23:47.653 Optional Asynchronous Events Supported 00:23:47.653 Namespace Attribute Notices: Not Supported 00:23:47.653 Firmware Activation Notices: Not Supported 00:23:47.653 ANA Change Notices: Not Supported 00:23:47.653 PLE Aggregate Log Change Notices: Not Supported 00:23:47.653 LBA Status Info Alert Notices: Not Supported 00:23:47.653 EGE Aggregate Log Change Notices: Not Supported 00:23:47.653 Normal NVM Subsystem Shutdown event: Not Supported 00:23:47.653 Zone Descriptor Change Notices: Not Supported 00:23:47.653 Discovery Log Change Notices: Supported 00:23:47.653 Controller Attributes 00:23:47.653 128-bit Host Identifier: Not Supported 00:23:47.653 Non-Operational Permissive Mode: Not Supported 00:23:47.653 NVM Sets: Not Supported 00:23:47.653 Read Recovery Levels: Not Supported 00:23:47.653 Endurance Groups: Not Supported 00:23:47.653 Predictable Latency Mode: Not Supported 00:23:47.653 Traffic Based Keep ALive: Not Supported 00:23:47.653 Namespace Granularity: Not Supported 00:23:47.653 SQ Associations: Not Supported 00:23:47.654 UUID List: Not Supported 00:23:47.654 Multi-Domain Subsystem: Not Supported 00:23:47.654 Fixed Capacity Management: Not Supported 00:23:47.654 Variable Capacity Management: Not Supported 00:23:47.654 Delete Endurance Group: Not Supported 00:23:47.654 Delete NVM Set: Not Supported 00:23:47.654 Extended LBA Formats Supported: Not Supported 00:23:47.654 Flexible Data Placement 
Supported: Not Supported 00:23:47.654 00:23:47.654 Controller Memory Buffer Support 00:23:47.654 ================================ 00:23:47.654 Supported: No 00:23:47.654 00:23:47.654 Persistent Memory Region Support 00:23:47.654 ================================ 00:23:47.654 Supported: No 00:23:47.654 00:23:47.654 Admin Command Set Attributes 00:23:47.654 ============================ 00:23:47.654 Security Send/Receive: Not Supported 00:23:47.654 Format NVM: Not Supported 00:23:47.654 Firmware Activate/Download: Not Supported 00:23:47.654 Namespace Management: Not Supported 00:23:47.654 Device Self-Test: Not Supported 00:23:47.654 Directives: Not Supported 00:23:47.654 NVMe-MI: Not Supported 00:23:47.654 Virtualization Management: Not Supported 00:23:47.654 Doorbell Buffer Config: Not Supported 00:23:47.654 Get LBA Status Capability: Not Supported 00:23:47.654 Command & Feature Lockdown Capability: Not Supported 00:23:47.654 Abort Command Limit: 1 00:23:47.654 Async Event Request Limit: 1 00:23:47.654 Number of Firmware Slots: N/A 00:23:47.654 Firmware Slot 1 Read-Only: N/A 00:23:47.654 Firmware Activation Without Reset: N/A 00:23:47.654 Multiple Update Detection Support: N/A 00:23:47.654 Firmware Update Granularity: No Information Provided 00:23:47.654 Per-Namespace SMART Log: No 00:23:47.654 Asymmetric Namespace Access Log Page: Not Supported 00:23:47.654 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:47.654 Command Effects Log Page: Not Supported 00:23:47.654 Get Log Page Extended Data: Supported 00:23:47.654 Telemetry Log Pages: Not Supported 00:23:47.654 Persistent Event Log Pages: Not Supported 00:23:47.654 Supported Log Pages Log Page: May Support 00:23:47.654 Commands Supported & Effects Log Page: Not Supported 00:23:47.654 Feature Identifiers & Effects Log Page:May Support 00:23:47.654 NVMe-MI Commands & Effects Log Page: May Support 00:23:47.654 Data Area 4 for Telemetry Log: Not Supported 00:23:47.654 Error Log Page Entries Supported: 1 00:23:47.654 Keep Alive: Not Supported 00:23:47.654 00:23:47.654 NVM Command Set Attributes 00:23:47.654 ========================== 00:23:47.654 Submission Queue Entry Size 00:23:47.654 Max: 1 00:23:47.654 Min: 1 00:23:47.654 Completion Queue Entry Size 00:23:47.654 Max: 1 00:23:47.654 Min: 1 00:23:47.654 Number of Namespaces: 0 00:23:47.654 Compare Command: Not Supported 00:23:47.654 Write Uncorrectable Command: Not Supported 00:23:47.654 Dataset Management Command: Not Supported 00:23:47.654 Write Zeroes Command: Not Supported 00:23:47.654 Set Features Save Field: Not Supported 00:23:47.654 Reservations: Not Supported 00:23:47.654 Timestamp: Not Supported 00:23:47.654 Copy: Not Supported 00:23:47.654 Volatile Write Cache: Not Present 00:23:47.654 Atomic Write Unit (Normal): 1 00:23:47.654 Atomic Write Unit (PFail): 1 00:23:47.654 Atomic Compare & Write Unit: 1 00:23:47.654 Fused Compare & Write: Not Supported 00:23:47.654 Scatter-Gather List 00:23:47.654 SGL Command Set: Supported 00:23:47.654 SGL Keyed: Not Supported 00:23:47.654 SGL Bit Bucket Descriptor: Not Supported 00:23:47.654 SGL Metadata Pointer: Not Supported 00:23:47.654 Oversized SGL: Not Supported 00:23:47.654 SGL Metadata Address: Not Supported 00:23:47.654 SGL Offset: Supported 00:23:47.654 Transport SGL Data Block: Not Supported 00:23:47.654 Replay Protected Memory Block: Not Supported 00:23:47.654 00:23:47.654 Firmware Slot Information 00:23:47.654 ========================= 00:23:47.654 Active slot: 0 00:23:47.654 00:23:47.654 00:23:47.654 Error Log 00:23:47.654 
========= 00:23:47.654 00:23:47.654 Active Namespaces 00:23:47.654 ================= 00:23:47.654 Discovery Log Page 00:23:47.654 ================== 00:23:47.654 Generation Counter: 2 00:23:47.654 Number of Records: 2 00:23:47.654 Record Format: 0 00:23:47.654 00:23:47.654 Discovery Log Entry 0 00:23:47.654 ---------------------- 00:23:47.654 Transport Type: 3 (TCP) 00:23:47.654 Address Family: 1 (IPv4) 00:23:47.654 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:47.654 Entry Flags: 00:23:47.654 Duplicate Returned Information: 0 00:23:47.654 Explicit Persistent Connection Support for Discovery: 0 00:23:47.654 Transport Requirements: 00:23:47.654 Secure Channel: Not Specified 00:23:47.654 Port ID: 1 (0x0001) 00:23:47.654 Controller ID: 65535 (0xffff) 00:23:47.654 Admin Max SQ Size: 32 00:23:47.654 Transport Service Identifier: 4420 00:23:47.654 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:47.654 Transport Address: 10.0.0.1 00:23:47.654 Discovery Log Entry 1 00:23:47.654 ---------------------- 00:23:47.654 Transport Type: 3 (TCP) 00:23:47.654 Address Family: 1 (IPv4) 00:23:47.654 Subsystem Type: 2 (NVM Subsystem) 00:23:47.654 Entry Flags: 00:23:47.654 Duplicate Returned Information: 0 00:23:47.654 Explicit Persistent Connection Support for Discovery: 0 00:23:47.654 Transport Requirements: 00:23:47.654 Secure Channel: Not Specified 00:23:47.654 Port ID: 1 (0x0001) 00:23:47.654 Controller ID: 65535 (0xffff) 00:23:47.654 Admin Max SQ Size: 32 00:23:47.654 Transport Service Identifier: 4420 00:23:47.654 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:47.654 Transport Address: 10.0.0.1 00:23:47.654 07:25:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:47.654 get_feature(0x01) failed 00:23:47.654 get_feature(0x02) failed 00:23:47.654 get_feature(0x04) failed 00:23:47.654 ===================================================== 00:23:47.654 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:47.654 ===================================================== 00:23:47.654 Controller Capabilities/Features 00:23:47.654 ================================ 00:23:47.654 Vendor ID: 0000 00:23:47.654 Subsystem Vendor ID: 0000 00:23:47.654 Serial Number: b3b2dacb084aaabc73a3 00:23:47.654 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:47.654 Firmware Version: 6.8.9-20 00:23:47.654 Recommended Arb Burst: 6 00:23:47.654 IEEE OUI Identifier: 00 00 00 00:23:47.654 Multi-path I/O 00:23:47.654 May have multiple subsystem ports: Yes 00:23:47.654 May have multiple controllers: Yes 00:23:47.654 Associated with SR-IOV VF: No 00:23:47.654 Max Data Transfer Size: Unlimited 00:23:47.654 Max Number of Namespaces: 1024 00:23:47.654 Max Number of I/O Queues: 128 00:23:47.654 NVMe Specification Version (VS): 1.3 00:23:47.654 NVMe Specification Version (Identify): 1.3 00:23:47.654 Maximum Queue Entries: 1024 00:23:47.654 Contiguous Queues Required: No 00:23:47.654 Arbitration Mechanisms Supported 00:23:47.654 Weighted Round Robin: Not Supported 00:23:47.654 Vendor Specific: Not Supported 00:23:47.654 Reset Timeout: 7500 ms 00:23:47.654 Doorbell Stride: 4 bytes 00:23:47.654 NVM Subsystem Reset: Not Supported 00:23:47.654 Command Sets Supported 00:23:47.654 NVM Command Set: Supported 00:23:47.654 Boot Partition: Not Supported 00:23:47.654 
Memory Page Size Minimum: 4096 bytes 00:23:47.654 Memory Page Size Maximum: 4096 bytes 00:23:47.654 Persistent Memory Region: Not Supported 00:23:47.654 Optional Asynchronous Events Supported 00:23:47.654 Namespace Attribute Notices: Supported 00:23:47.654 Firmware Activation Notices: Not Supported 00:23:47.654 ANA Change Notices: Supported 00:23:47.654 PLE Aggregate Log Change Notices: Not Supported 00:23:47.654 LBA Status Info Alert Notices: Not Supported 00:23:47.654 EGE Aggregate Log Change Notices: Not Supported 00:23:47.654 Normal NVM Subsystem Shutdown event: Not Supported 00:23:47.654 Zone Descriptor Change Notices: Not Supported 00:23:47.654 Discovery Log Change Notices: Not Supported 00:23:47.654 Controller Attributes 00:23:47.654 128-bit Host Identifier: Supported 00:23:47.654 Non-Operational Permissive Mode: Not Supported 00:23:47.654 NVM Sets: Not Supported 00:23:47.654 Read Recovery Levels: Not Supported 00:23:47.654 Endurance Groups: Not Supported 00:23:47.654 Predictable Latency Mode: Not Supported 00:23:47.654 Traffic Based Keep ALive: Supported 00:23:47.654 Namespace Granularity: Not Supported 00:23:47.655 SQ Associations: Not Supported 00:23:47.655 UUID List: Not Supported 00:23:47.655 Multi-Domain Subsystem: Not Supported 00:23:47.655 Fixed Capacity Management: Not Supported 00:23:47.655 Variable Capacity Management: Not Supported 00:23:47.655 Delete Endurance Group: Not Supported 00:23:47.655 Delete NVM Set: Not Supported 00:23:47.655 Extended LBA Formats Supported: Not Supported 00:23:47.655 Flexible Data Placement Supported: Not Supported 00:23:47.655 00:23:47.655 Controller Memory Buffer Support 00:23:47.655 ================================ 00:23:47.655 Supported: No 00:23:47.655 00:23:47.655 Persistent Memory Region Support 00:23:47.655 ================================ 00:23:47.655 Supported: No 00:23:47.655 00:23:47.655 Admin Command Set Attributes 00:23:47.655 ============================ 00:23:47.655 Security Send/Receive: Not Supported 00:23:47.655 Format NVM: Not Supported 00:23:47.655 Firmware Activate/Download: Not Supported 00:23:47.655 Namespace Management: Not Supported 00:23:47.655 Device Self-Test: Not Supported 00:23:47.655 Directives: Not Supported 00:23:47.655 NVMe-MI: Not Supported 00:23:47.655 Virtualization Management: Not Supported 00:23:47.655 Doorbell Buffer Config: Not Supported 00:23:47.655 Get LBA Status Capability: Not Supported 00:23:47.655 Command & Feature Lockdown Capability: Not Supported 00:23:47.655 Abort Command Limit: 4 00:23:47.655 Async Event Request Limit: 4 00:23:47.655 Number of Firmware Slots: N/A 00:23:47.655 Firmware Slot 1 Read-Only: N/A 00:23:47.655 Firmware Activation Without Reset: N/A 00:23:47.655 Multiple Update Detection Support: N/A 00:23:47.655 Firmware Update Granularity: No Information Provided 00:23:47.655 Per-Namespace SMART Log: Yes 00:23:47.655 Asymmetric Namespace Access Log Page: Supported 00:23:47.655 ANA Transition Time : 10 sec 00:23:47.655 00:23:47.655 Asymmetric Namespace Access Capabilities 00:23:47.655 ANA Optimized State : Supported 00:23:47.655 ANA Non-Optimized State : Supported 00:23:47.655 ANA Inaccessible State : Supported 00:23:47.655 ANA Persistent Loss State : Supported 00:23:47.655 ANA Change State : Supported 00:23:47.655 ANAGRPID is not changed : No 00:23:47.655 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:47.655 00:23:47.655 ANA Group Identifier Maximum : 128 00:23:47.655 Number of ANA Group Identifiers : 128 00:23:47.655 Max Number of Allowed Namespaces : 1024 00:23:47.655 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:23:47.655 Command Effects Log Page: Supported 00:23:47.655 Get Log Page Extended Data: Supported 00:23:47.655 Telemetry Log Pages: Not Supported 00:23:47.655 Persistent Event Log Pages: Not Supported 00:23:47.655 Supported Log Pages Log Page: May Support 00:23:47.655 Commands Supported & Effects Log Page: Not Supported 00:23:47.655 Feature Identifiers & Effects Log Page:May Support 00:23:47.655 NVMe-MI Commands & Effects Log Page: May Support 00:23:47.655 Data Area 4 for Telemetry Log: Not Supported 00:23:47.655 Error Log Page Entries Supported: 128 00:23:47.655 Keep Alive: Supported 00:23:47.655 Keep Alive Granularity: 1000 ms 00:23:47.655 00:23:47.655 NVM Command Set Attributes 00:23:47.655 ========================== 00:23:47.655 Submission Queue Entry Size 00:23:47.655 Max: 64 00:23:47.655 Min: 64 00:23:47.655 Completion Queue Entry Size 00:23:47.655 Max: 16 00:23:47.655 Min: 16 00:23:47.655 Number of Namespaces: 1024 00:23:47.655 Compare Command: Not Supported 00:23:47.655 Write Uncorrectable Command: Not Supported 00:23:47.655 Dataset Management Command: Supported 00:23:47.655 Write Zeroes Command: Supported 00:23:47.655 Set Features Save Field: Not Supported 00:23:47.655 Reservations: Not Supported 00:23:47.655 Timestamp: Not Supported 00:23:47.655 Copy: Not Supported 00:23:47.655 Volatile Write Cache: Present 00:23:47.655 Atomic Write Unit (Normal): 1 00:23:47.655 Atomic Write Unit (PFail): 1 00:23:47.655 Atomic Compare & Write Unit: 1 00:23:47.655 Fused Compare & Write: Not Supported 00:23:47.655 Scatter-Gather List 00:23:47.655 SGL Command Set: Supported 00:23:47.655 SGL Keyed: Not Supported 00:23:47.655 SGL Bit Bucket Descriptor: Not Supported 00:23:47.655 SGL Metadata Pointer: Not Supported 00:23:47.655 Oversized SGL: Not Supported 00:23:47.655 SGL Metadata Address: Not Supported 00:23:47.655 SGL Offset: Supported 00:23:47.655 Transport SGL Data Block: Not Supported 00:23:47.655 Replay Protected Memory Block: Not Supported 00:23:47.655 00:23:47.655 Firmware Slot Information 00:23:47.655 ========================= 00:23:47.655 Active slot: 0 00:23:47.655 00:23:47.655 Asymmetric Namespace Access 00:23:47.655 =========================== 00:23:47.655 Change Count : 0 00:23:47.655 Number of ANA Group Descriptors : 1 00:23:47.655 ANA Group Descriptor : 0 00:23:47.655 ANA Group ID : 1 00:23:47.655 Number of NSID Values : 1 00:23:47.655 Change Count : 0 00:23:47.655 ANA State : 1 00:23:47.655 Namespace Identifier : 1 00:23:47.655 00:23:47.655 Commands Supported and Effects 00:23:47.655 ============================== 00:23:47.655 Admin Commands 00:23:47.655 -------------- 00:23:47.655 Get Log Page (02h): Supported 00:23:47.655 Identify (06h): Supported 00:23:47.655 Abort (08h): Supported 00:23:47.655 Set Features (09h): Supported 00:23:47.655 Get Features (0Ah): Supported 00:23:47.655 Asynchronous Event Request (0Ch): Supported 00:23:47.655 Keep Alive (18h): Supported 00:23:47.655 I/O Commands 00:23:47.655 ------------ 00:23:47.655 Flush (00h): Supported 00:23:47.655 Write (01h): Supported LBA-Change 00:23:47.655 Read (02h): Supported 00:23:47.655 Write Zeroes (08h): Supported LBA-Change 00:23:47.655 Dataset Management (09h): Supported 00:23:47.655 00:23:47.655 Error Log 00:23:47.655 ========= 00:23:47.655 Entry: 0 00:23:47.655 Error Count: 0x3 00:23:47.655 Submission Queue Id: 0x0 00:23:47.655 Command Id: 0x5 00:23:47.655 Phase Bit: 0 00:23:47.655 Status Code: 0x2 00:23:47.655 Status Code Type: 0x0 00:23:47.655 Do Not Retry: 1 00:23:47.655 
Error Location: 0x28 00:23:47.655 LBA: 0x0 00:23:47.655 Namespace: 0x0 00:23:47.655 Vendor Log Page: 0x0 00:23:47.655 ----------- 00:23:47.655 Entry: 1 00:23:47.655 Error Count: 0x2 00:23:47.655 Submission Queue Id: 0x0 00:23:47.655 Command Id: 0x5 00:23:47.655 Phase Bit: 0 00:23:47.655 Status Code: 0x2 00:23:47.655 Status Code Type: 0x0 00:23:47.655 Do Not Retry: 1 00:23:47.655 Error Location: 0x28 00:23:47.655 LBA: 0x0 00:23:47.655 Namespace: 0x0 00:23:47.655 Vendor Log Page: 0x0 00:23:47.655 ----------- 00:23:47.655 Entry: 2 00:23:47.655 Error Count: 0x1 00:23:47.655 Submission Queue Id: 0x0 00:23:47.655 Command Id: 0x4 00:23:47.655 Phase Bit: 0 00:23:47.655 Status Code: 0x2 00:23:47.655 Status Code Type: 0x0 00:23:47.655 Do Not Retry: 1 00:23:47.655 Error Location: 0x28 00:23:47.655 LBA: 0x0 00:23:47.655 Namespace: 0x0 00:23:47.655 Vendor Log Page: 0x0 00:23:47.655 00:23:47.655 Number of Queues 00:23:47.655 ================ 00:23:47.655 Number of I/O Submission Queues: 128 00:23:47.655 Number of I/O Completion Queues: 128 00:23:47.655 00:23:47.655 ZNS Specific Controller Data 00:23:47.655 ============================ 00:23:47.655 Zone Append Size Limit: 0 00:23:47.655 00:23:47.655 00:23:47.655 Active Namespaces 00:23:47.655 ================= 00:23:47.655 get_feature(0x05) failed 00:23:47.655 Namespace ID:1 00:23:47.655 Command Set Identifier: NVM (00h) 00:23:47.655 Deallocate: Supported 00:23:47.655 Deallocated/Unwritten Error: Not Supported 00:23:47.655 Deallocated Read Value: Unknown 00:23:47.655 Deallocate in Write Zeroes: Not Supported 00:23:47.655 Deallocated Guard Field: 0xFFFF 00:23:47.655 Flush: Supported 00:23:47.655 Reservation: Not Supported 00:23:47.655 Namespace Sharing Capabilities: Multiple Controllers 00:23:47.655 Size (in LBAs): 1953525168 (931GiB) 00:23:47.655 Capacity (in LBAs): 1953525168 (931GiB) 00:23:47.655 Utilization (in LBAs): 1953525168 (931GiB) 00:23:47.655 UUID: 7802d273-c19a-451e-a788-bc5c44c80ae2 00:23:47.655 Thin Provisioning: Not Supported 00:23:47.655 Per-NS Atomic Units: Yes 00:23:47.655 Atomic Boundary Size (Normal): 0 00:23:47.655 Atomic Boundary Size (PFail): 0 00:23:47.655 Atomic Boundary Offset: 0 00:23:47.655 NGUID/EUI64 Never Reused: No 00:23:47.656 ANA group ID: 1 00:23:47.656 Namespace Write Protected: No 00:23:47.656 Number of LBA Formats: 1 00:23:47.656 Current LBA Format: LBA Format #00 00:23:47.656 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:47.656 00:23:47.656 07:25:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:47.656 07:25:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:47.656 07:25:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:23:47.656 07:25:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:47.656 07:25:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:23:47.656 07:25:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:47.656 07:25:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:47.656 rmmod nvme_tcp 00:23:47.656 rmmod nvme_fabrics 00:23:47.656 07:25:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:47.656 07:25:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:23:47.656 07:25:51 
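[Editor's note] Condensed from the logged command lines, the host-side probes above were one kernel-initiator discovery and two SPDK identifies; the full binary paths are shortened here, and the get_feature(0x01/0x02/0x04/0x05) failures appear to be the identify tool probing optional features the kernel target does not implement rather than test errors.

  # Discovery via nvme-cli against the kernel target (flags as logged, order rearranged).
  nvme discover -t tcp -a 10.0.0.1 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
      --hostid=29f67375-a902-e411-ace9-001e67bc3c9a
  # Identify the discovery controller, then the NVM subsystem itself.
  spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'
  spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'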
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:23:47.656 07:25:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:23:47.656 07:25:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:47.656 07:25:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:47.656 07:25:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:47.656 07:25:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:23:47.656 07:25:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:23:47.656 07:25:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:47.656 07:25:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:23:47.916 07:25:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:47.916 07:25:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:47.916 07:25:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.916 07:25:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:47.916 07:25:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.822 07:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:49.822 07:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:49.822 07:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:49.822 07:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:23:49.822 07:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:49.822 07:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:49.822 07:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:49.822 07:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:49.822 07:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:23:49.822 07:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:23:49.822 07:25:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:51.199 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:51.199 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:51.199 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:51.199 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:51.199 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:51.199 0000:00:04.2 
(8086 0e22): ioatdma -> vfio-pci 00:23:51.199 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:51.199 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:51.199 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:51.199 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:51.199 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:51.199 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:51.199 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:51.199 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:51.199 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:51.199 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:52.137 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:23:52.395 00:23:52.395 real 0m10.134s 00:23:52.395 user 0m2.243s 00:23:52.395 sys 0m3.845s 00:23:52.395 07:25:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:52.395 07:25:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.395 ************************************ 00:23:52.395 END TEST nvmf_identify_kernel_target 00:23:52.396 ************************************ 00:23:52.396 07:25:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:52.396 07:25:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:52.396 07:25:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:52.396 07:25:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.396 ************************************ 00:23:52.396 START TEST nvmf_auth_host 00:23:52.396 ************************************ 00:23:52.396 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:52.396 * Looking for test storage... 
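[Editor's note] Before the auth test gets going, the EXIT trap of the previous test (nvmftestfini plus clean_kernel_target, traced above) has already torn the kernel target back down and returned the NVMe device to vfio-pci via setup.sh. Condensed below; destinations marked as assumptions are not printed in the trace, everything else mirrors the logged rm/rmdir/modprobe lines.

  # Host side: unload the initiator modules and undo the iptables/netns plumbing.
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore      # drop the test's ACCEPT rule
  ip netns delete cvl_0_0_ns_spdk                           # the trace calls _remove_spdk_ns; this underlying command is an assumption
  ip -4 addr flush cvl_0_1
  # Target side: disable and remove the configfs objects, then unload nvmet.
  echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable   # destination assumed
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  modprobe -r nvmet_tcp nvmet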
00:23:52.396 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:52.396 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:52.396 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:23:52.396 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:52.654 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:52.654 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:52.654 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:52.654 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:52.654 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:52.654 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:52.654 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:52.654 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:52.654 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:52.654 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:52.654 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:52.654 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:52.654 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:23:52.654 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:23:52.654 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:52.654 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:52.654 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:23:52.654 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:23:52.654 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:52.654 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:23:52.654 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:52.654 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:23:52.654 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:23:52.654 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:52.654 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:23:52.654 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:52.654 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:52.654 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:52.654 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:23:52.654 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:52.654 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:52.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:52.654 --rc genhtml_branch_coverage=1 00:23:52.655 --rc genhtml_function_coverage=1 00:23:52.655 --rc genhtml_legend=1 00:23:52.655 --rc geninfo_all_blocks=1 00:23:52.655 --rc geninfo_unexecuted_blocks=1 00:23:52.655 00:23:52.655 ' 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:52.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:52.655 --rc genhtml_branch_coverage=1 00:23:52.655 --rc genhtml_function_coverage=1 00:23:52.655 --rc genhtml_legend=1 00:23:52.655 --rc geninfo_all_blocks=1 00:23:52.655 --rc geninfo_unexecuted_blocks=1 00:23:52.655 00:23:52.655 ' 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:52.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:52.655 --rc genhtml_branch_coverage=1 00:23:52.655 --rc genhtml_function_coverage=1 00:23:52.655 --rc genhtml_legend=1 00:23:52.655 --rc geninfo_all_blocks=1 00:23:52.655 --rc geninfo_unexecuted_blocks=1 00:23:52.655 00:23:52.655 ' 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:52.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:52.655 --rc genhtml_branch_coverage=1 00:23:52.655 --rc genhtml_function_coverage=1 00:23:52.655 --rc genhtml_legend=1 00:23:52.655 --rc geninfo_all_blocks=1 00:23:52.655 --rc geninfo_unexecuted_blocks=1 00:23:52.655 00:23:52.655 ' 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:52.655 07:25:55 
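[Editor's note] The lcov probe above is scripts/common.sh deciding whether the installed lcov is old enough (lt 1.15 2) to need the extra branch/function coverage flags; cmp_versions splits both version strings on '.', '-' and ':' and compares them component by component. A simplified stand-alone sketch, with function names mirroring the trace:

  # Component-wise version compare; lt A B is true when A < B.
  cmp_versions() {
      local IFS=.-: op=$2
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      local v
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' ]]; return; }
          ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' ]]; return; }
      done
      return 1   # versions equal: neither strictly less nor greater
  }
  lt() { cmp_versions "$1" '<' "$2"; }   # e.g. lt 1.15 2 -> true, so the lcov branch/function flags get added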
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:52.655 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:23:52.655 07:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.188 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:55.188 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:23:55.188 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:55.188 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:55.188 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:55.188 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:55.188 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:55.188 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:23:55.188 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:55.188 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:23:55.188 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:23:55.189 07:25:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:55.189 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:55.189 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.189 
07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:55.189 Found net devices under 0000:09:00.0: cvl_0_0 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:55.189 Found net devices under 0000:09:00.1: cvl_0_1 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:55.189 07:25:58 
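[Editor's note] gather_supported_nvmf_pci_devs, traced above, works from sysfs: it collects the PCI addresses of NVMe-oF-capable NICs (here the two Intel E810 0x159b functions at 0000:09:00.0 and 0000:09:00.1) and maps each function to its netdev name by globbing the device's net/ directory. The essence, as a sketch; the real script keeps its own pci_bus_cache rather than calling lspci, and the operstate "up" check from the trace is omitted here.

  # Map supported NIC PCI functions to their netdev names (cvl_0_0 / cvl_0_1 in this run).
  net_devs=()
  for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do   # E810 100GbE functions; lspci use is an assumption
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      [[ -e ${pci_net_devs[0]} ]] || continue                  # function not bound to a network driver
      net_devs+=("${pci_net_devs[@]##*/}")                     # strip the sysfs path, keep the interface name
  done
  printf 'Found net device: %s\n' "${net_devs[@]}"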
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:55.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:55.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:23:55.189 00:23:55.189 --- 10.0.0.2 ping statistics --- 00:23:55.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.189 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:55.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:55.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:23:55.189 00:23:55.189 --- 10.0.0.1 ping statistics --- 00:23:55.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.189 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:23:55.189 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2594241 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2594241 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 2594241 ']' 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
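[Editor's note] nvmf_tcp_init and nvmfappstart, traced above, split the two E810 ports across a network namespace so target and initiator talk over real wire: cvl_0_0 moves into cvl_0_0_ns_spdk with 10.0.0.2, cvl_0_1 stays in the root namespace with 10.0.0.1, an iptables rule admits TCP/4420, both directions are ping-checked, and the SPDK target is started inside the namespace. Condensed from the trace (binary path shortened, iptables comment abbreviated):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:...'  # full rule text in the trace
  ping -c 1 10.0.0.2                                        # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target ns -> root ns
  # Start the SPDK nvmf target inside the namespace with DH-HMAC-CHAP debug logging enabled.
  ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &

waitforlisten then polls until the process answers on /var/tmp/spdk.sock, which is the "Waiting for process to start up..." message above.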
00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=48437d53d294c5275565fb541274786a 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.QcE 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 48437d53d294c5275565fb541274786a 0 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 48437d53d294c5275565fb541274786a 0 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=48437d53d294c5275565fb541274786a 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.QcE 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.QcE 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.QcE 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:55.190 07:25:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4ea5f396401241d963e0928a233756ae324d8d0c26d5be75b5e29a5fe55bca5f 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.rAl 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4ea5f396401241d963e0928a233756ae324d8d0c26d5be75b5e29a5fe55bca5f 3 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4ea5f396401241d963e0928a233756ae324d8d0c26d5be75b5e29a5fe55bca5f 3 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4ea5f396401241d963e0928a233756ae324d8d0c26d5be75b5e29a5fe55bca5f 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:23:55.190 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.rAl 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.rAl 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.rAl 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6cd380a382e3699d3b3572a08a8125e5175edc1ff5878ad3 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.VSZ 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6cd380a382e3699d3b3572a08a8125e5175edc1ff5878ad3 0 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6cd380a382e3699d3b3572a08a8125e5175edc1ff5878ad3 0 
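gen_dhchap_key, traced above, reads the requested number of random bytes with xxd, keeps the hex text as the secret, and wraps it into the DHHC-1:<digest id>:<base64>: representation with an inline python snippet before writing it to a /tmp/spdk.key-* file. A minimal stand-alone sketch, assuming (as the encoded values in the trace suggest) that the base64 payload is the ASCII hex string followed by a 4-byte checksum, taken here to be a little-endian CRC-32 of that string:

# 32 hex characters of secret (16 random bytes), as in "gen_dhchap_key null 32"
key=$(xxd -p -c0 -l 16 /dev/urandom)

# Digest id per the map in the trace: 0=null, 1=sha256, 2=sha384, 3=sha512
python3 - "$key" 0 <<'PY'
import base64, struct, sys, zlib
secret, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = struct.pack("<I", zlib.crc32(secret))   # checksum layout is an assumption
print("DHHC-1:%02x:%s:" % (digest, base64.b64encode(secret + crc).decode()))
PY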
00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6cd380a382e3699d3b3572a08a8125e5175edc1ff5878ad3 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.VSZ 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.VSZ 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.VSZ 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3121ed0e28c460172cb732404c119cae1cc1e85631e16f5d 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.LoG 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3121ed0e28c460172cb732404c119cae1cc1e85631e16f5d 2 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3121ed0e28c460172cb732404c119cae1cc1e85631e16f5d 2 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3121ed0e28c460172cb732404c119cae1cc1e85631e16f5d 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.LoG 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.LoG 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.LoG 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:55.449 07:25:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=64f1bdfbefcb39e7989e4115ada7ae69 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Zik 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 64f1bdfbefcb39e7989e4115ada7ae69 1 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 64f1bdfbefcb39e7989e4115ada7ae69 1 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=64f1bdfbefcb39e7989e4115ada7ae69 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Zik 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Zik 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Zik 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:55.449 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b3b7938cdc5abc53d4569f35f7da9620 00:23:55.450 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:23:55.450 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.YaR 00:23:55.450 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b3b7938cdc5abc53d4569f35f7da9620 1 00:23:55.450 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b3b7938cdc5abc53d4569f35f7da9620 1 00:23:55.450 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:55.450 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:55.450 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=b3b7938cdc5abc53d4569f35f7da9620 00:23:55.450 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:23:55.450 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:55.450 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.YaR 00:23:55.450 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.YaR 00:23:55.450 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.YaR 00:23:55.450 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:23:55.450 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:55.450 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:55.450 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:55.450 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:23:55.450 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:23:55.450 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:55.450 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3f080bfc35d948afe273a500710821d5cd766de2664df02a 00:23:55.450 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:23:55.450 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.xg8 00:23:55.450 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3f080bfc35d948afe273a500710821d5cd766de2664df02a 2 00:23:55.450 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3f080bfc35d948afe273a500710821d5cd766de2664df02a 2 00:23:55.450 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:55.450 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:55.450 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3f080bfc35d948afe273a500710821d5cd766de2664df02a 00:23:55.450 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:23:55.450 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:55.708 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.xg8 00:23:55.708 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.xg8 00:23:55.708 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.xg8 00:23:55.708 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:23:55.708 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:55.708 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:55.708 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:55.708 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:23:55.708 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:55.708 07:25:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:55.708 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bc5413f6b72fcc54d08603dd5f1dda05 00:23:55.708 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:23:55.708 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.XPh 00:23:55.708 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bc5413f6b72fcc54d08603dd5f1dda05 0 00:23:55.708 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bc5413f6b72fcc54d08603dd5f1dda05 0 00:23:55.708 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:55.708 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:55.708 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bc5413f6b72fcc54d08603dd5f1dda05 00:23:55.708 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:23:55.708 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:55.708 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.XPh 00:23:55.708 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.XPh 00:23:55.708 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.XPh 00:23:55.708 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:23:55.708 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:55.708 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:55.708 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:55.708 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:23:55.708 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:23:55.708 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:55.708 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b946e7d3ad6d5cc917847a24f76669d0d9d453978881740aadb5b4013bb43357 00:23:55.708 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:23:55.708 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Stw 00:23:55.708 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b946e7d3ad6d5cc917847a24f76669d0d9d453978881740aadb5b4013bb43357 3 00:23:55.708 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b946e7d3ad6d5cc917847a24f76669d0d9d453978881740aadb5b4013bb43357 3 00:23:55.709 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:55.709 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:55.709 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b946e7d3ad6d5cc917847a24f76669d0d9d453978881740aadb5b4013bb43357 00:23:55.709 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:23:55.709 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:23:55.709 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Stw 00:23:55.709 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Stw 00:23:55.709 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Stw 00:23:55.709 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:23:55.709 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2594241 00:23:55.709 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 2594241 ']' 00:23:55.709 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.709 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:55.709 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:55.709 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:55.709 07:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.966 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:55.966 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:23:55.966 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:55.966 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.QcE 00:23:55.966 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.966 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.966 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.966 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.rAl ]] 00:23:55.966 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.rAl 00:23:55.966 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.966 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.966 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.966 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:55.966 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.VSZ 00:23:55.966 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.966 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.966 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.966 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.LoG ]] 00:23:55.966 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.LoG 00:23:55.966 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.966 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.966 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.966 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:55.966 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Zik 00:23:55.966 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.966 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.966 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.966 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.YaR ]] 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.YaR 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.xg8 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.XPh ]] 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.XPh 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Stw 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:55.967 07:25:59 
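Once all five key/ckey pairs exist as files, host/auth.sh registers them with the running SPDK application through the keyring RPCs, which is what the rpc_cmd keyring_file_add_key calls above do. Outside the harness the same calls would go through scripts/rpc.py against the application's RPC socket (the socket path below is the default and an assumption; the key names and file paths are the ones from the trace):

rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc keyring_file_add_key key0  /tmp/spdk.key-null.QcE
$rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.rAl
$rpc keyring_file_add_key key1  /tmp/spdk.key-null.VSZ
$rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.LoG
# ...continuing with key2/ckey2, key3/ckey3 and key4 exactly as in the trace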
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:55.967 07:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:57.340 Waiting for block devices as requested 00:23:57.340 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:57.340 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:57.340 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:57.598 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:57.598 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:57.598 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:57.598 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:57.856 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:57.856 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:23:57.856 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:58.113 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:58.113 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:58.113 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:58.371 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:58.371 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:58.371 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:58.371 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:58.938 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:58.938 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:58.938 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:23:58.938 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:23:58.938 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:58.938 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:23:58.938 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:23:58.938 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:23:58.938 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:58.938 No valid GPT data, bailing 00:23:58.938 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:58.938 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:23:58.938 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:23:58.938 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:23:58.938 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:23:58.938 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:58.938 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:58.938 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:58.938 07:26:02 
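configure_kernel_target, whose trace starts here, builds a kernel NVMe/TCP soft target over the local NVMe drive and exports it on 10.0.0.1:4420 through the nvmet configfs tree. The xtrace shows the values being echoed but not the files they are redirected into; the attribute names below are the standard nvmet configfs ones and are an assumption, so treat this as an illustrative sketch rather than a transcript:

subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1

modprobe nvmet
mkdir "$subsys" "$subsys/namespaces/1" "$port"

echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # attribute names assumed
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"

The trace then runs nvme discover against 10.0.0.1:4420, and the discovery log that follows lists both the discovery subsystem and nqn.2024-02.io.spdk:cnode0, confirming the port is live.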
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:58.938 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:23:58.938 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:23:58.938 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:23:58.938 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:23:58.938 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:23:58.938 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:23:58.938 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:23:58.938 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:58.938 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:23:59.197 00:23:59.197 Discovery Log Number of Records 2, Generation counter 2 00:23:59.197 =====Discovery Log Entry 0====== 00:23:59.197 trtype: tcp 00:23:59.197 adrfam: ipv4 00:23:59.197 subtype: current discovery subsystem 00:23:59.197 treq: not specified, sq flow control disable supported 00:23:59.197 portid: 1 00:23:59.197 trsvcid: 4420 00:23:59.197 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:59.197 traddr: 10.0.0.1 00:23:59.197 eflags: none 00:23:59.197 sectype: none 00:23:59.197 =====Discovery Log Entry 1====== 00:23:59.197 trtype: tcp 00:23:59.197 adrfam: ipv4 00:23:59.197 subtype: nvme subsystem 00:23:59.197 treq: not specified, sq flow control disable supported 00:23:59.197 portid: 1 00:23:59.197 trsvcid: 4420 00:23:59.197 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:59.197 traddr: 10.0.0.1 00:23:59.197 eflags: none 00:23:59.197 sectype: none 00:23:59.197 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:59.197 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:23:59.197 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:59.197 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:59.197 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.197 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:59.197 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:59.197 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:59.197 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmNkMzgwYTM4MmUzNjk5ZDNiMzU3MmEwOGE4MTI1ZTUxNzVlZGMxZmY1ODc4YWQzDJIY4w==: 00:23:59.197 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: 00:23:59.197 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:59.197 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:23:59.197 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmNkMzgwYTM4MmUzNjk5ZDNiMzU3MmEwOGE4MTI1ZTUxNzVlZGMxZmY1ODc4YWQzDJIY4w==: 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: ]] 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.198 nvme0n1 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg0MzdkNTNkMjk0YzUyNzU1NjVmYjU0MTI3NDc4NmHv1/kd: 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg0MzdkNTNkMjk0YzUyNzU1NjVmYjU0MTI3NDc4NmHv1/kd: 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: ]] 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
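Each authentication round begins with nvmet_auth_set_key, which provisions the DH-HMAC-CHAP credentials for the host NQN on the kernel target: earlier the trace created the host entry and linked it into the subsystem's allowed_hosts, and here it echoes the hash name, DH group and the two DHHC-1 secrets into that entry. The destination attribute files are not visible in the xtrace, so the names below are the standard nvmet host attributes and are an assumption; the values are the keyid 1 pair from the first round above:

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

mkdir "$host"
echo 0 > "$subsys/attr_allow_any_host"
ln -s "$host" "$subsys/allowed_hosts/"

# nvmet_auth_set_key sha256 ffdhe2048 1 (attribute names assumed)
echo 'hmac(sha256)' > "$host/dhchap_hash"
echo ffdhe2048      > "$host/dhchap_dhgroup"
echo 'DHHC-1:00:NmNkMzgwYTM4MmUzNjk5ZDNiMzU3MmEwOGE4MTI1ZTUxNzVlZGMxZmY1ODc4YWQzDJIY4w==:' > "$host/dhchap_key"
echo 'DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==:' > "$host/dhchap_ctrl_key"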
00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.198 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.457 nvme0n1 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.457 07:26:02 
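On the SPDK side, connect_authenticate (traced above and repeated for every digest/DH-group/keyid combination) narrows the host's DH-HMAC-CHAP options, attaches a controller to the kernel target using the keyring entries, confirms that nvme0 and its nvme0n1 namespace appear, and detaches again. A stand-alone sketch of one iteration via scripts/rpc.py (socket path assumed; the flags and values are taken from the trace):

rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"

# Restrict the SPDK host to one digest/DH group for this round
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Attach with key0 as the host key and ckey0 as the controller (bidirectional) key
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Verify the controller came up, then detach before the next combination
$rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"
$rpc bdev_nvme_detach_controller nvme0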
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmNkMzgwYTM4MmUzNjk5ZDNiMzU3MmEwOGE4MTI1ZTUxNzVlZGMxZmY1ODc4YWQzDJIY4w==: 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmNkMzgwYTM4MmUzNjk5ZDNiMzU3MmEwOGE4MTI1ZTUxNzVlZGMxZmY1ODc4YWQzDJIY4w==: 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: ]] 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:59.457 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:59.458 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.458 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.717 nvme0n1 00:23:59.717 07:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.717 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.717 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:59.717 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.717 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.717 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.717 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.717 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.717 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.718 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.718 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.718 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:59.718 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:59.718 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.718 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:59.718 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:59.718 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:59.718 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjRmMWJkZmJlZmNiMzllNzk4OWU0MTE1YWRhN2FlNjldOq8g: 00:23:59.718 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: 00:23:59.718 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:59.718 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:59.718 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NjRmMWJkZmJlZmNiMzllNzk4OWU0MTE1YWRhN2FlNjldOq8g: 00:23:59.718 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: ]] 00:23:59.718 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: 00:23:59.718 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:59.718 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.718 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:59.718 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:59.718 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:59.718 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:59.718 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:59.718 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.718 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.718 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.718 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:59.718 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:59.718 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:59.718 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:59.718 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.718 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.718 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:59.718 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.718 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:59.718 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:59.718 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:59.718 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:59.718 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.718 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.977 nvme0n1 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2YwODBiZmMzNWQ5NDhhZmUyNzNhNTAwNzEwODIxZDVjZDc2NmRlMjY2NGRmMDJhUPWj7Q==: 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2YwODBiZmMzNWQ5NDhhZmUyNzNhNTAwNzEwODIxZDVjZDc2NmRlMjY2NGRmMDJhUPWj7Q==: 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: ]] 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:59.977 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.978 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.237 nvme0n1 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Yjk0NmU3ZDNhZDZkNWNjOTE3ODQ3YTI0Zjc2NjY5ZDBkOWQ0NTM5Nzg4ODE3NDBhYWRiNWI0MDEzYmI0MzM1NwdCLNU=: 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk0NmU3ZDNhZDZkNWNjOTE3ODQ3YTI0Zjc2NjY5ZDBkOWQ0NTM5Nzg4ODE3NDBhYWRiNWI0MDEzYmI0MzM1NwdCLNU=: 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.237 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.496 nvme0n1 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.496 07:26:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg0MzdkNTNkMjk0YzUyNzU1NjVmYjU0MTI3NDc4NmHv1/kd: 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg0MzdkNTNkMjk0YzUyNzU1NjVmYjU0MTI3NDc4NmHv1/kd: 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: ]] 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.496 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.756 nvme0n1 00:24:00.756 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.756 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.756 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:00.756 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.756 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.756 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.756 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.756 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.756 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.756 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.756 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.756 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:00.756 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:00.756 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:24:00.756 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:00.756 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:00.756 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:00.756 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmNkMzgwYTM4MmUzNjk5ZDNiMzU3MmEwOGE4MTI1ZTUxNzVlZGMxZmY1ODc4YWQzDJIY4w==: 00:24:00.756 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: 00:24:00.756 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:00.756 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:00.756 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmNkMzgwYTM4MmUzNjk5ZDNiMzU3MmEwOGE4MTI1ZTUxNzVlZGMxZmY1ODc4YWQzDJIY4w==: 00:24:00.756 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: ]] 00:24:00.756 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: 00:24:00.756 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:24:00.756 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:00.756 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:00.756 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:00.756 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:00.756 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:00.756 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:00.756 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.756 07:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.756 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.756 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:00.756 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:00.756 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:00.756 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:00.756 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.756 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.756 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:00.756 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.756 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:00.756 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:00.756 
07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:00.756 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:00.756 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.756 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.016 nvme0n1 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjRmMWJkZmJlZmNiMzllNzk4OWU0MTE1YWRhN2FlNjldOq8g: 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjRmMWJkZmJlZmNiMzllNzk4OWU0MTE1YWRhN2FlNjldOq8g: 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: ]] 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:01.016 07:26:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.016 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.275 nvme0n1 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2YwODBiZmMzNWQ5NDhhZmUyNzNhNTAwNzEwODIxZDVjZDc2NmRlMjY2NGRmMDJhUPWj7Q==: 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2YwODBiZmMzNWQ5NDhhZmUyNzNhNTAwNzEwODIxZDVjZDc2NmRlMjY2NGRmMDJhUPWj7Q==: 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: ]] 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.275 07:26:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.275 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.566 nvme0n1 00:24:01.566 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.566 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.566 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:01.566 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.566 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.566 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.566 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.566 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.566 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.566 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.566 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.566 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:01.566 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:01.566 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:01.566 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:01.566 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:01.567 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:01.567 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk0NmU3ZDNhZDZkNWNjOTE3ODQ3YTI0Zjc2NjY5ZDBkOWQ0NTM5Nzg4ODE3NDBhYWRiNWI0MDEzYmI0MzM1NwdCLNU=: 00:24:01.567 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:01.567 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:01.567 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:01.567 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk0NmU3ZDNhZDZkNWNjOTE3ODQ3YTI0Zjc2NjY5ZDBkOWQ0NTM5Nzg4ODE3NDBhYWRiNWI0MDEzYmI0MzM1NwdCLNU=: 00:24:01.567 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:01.567 07:26:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:24:01.567 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:01.567 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:01.567 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:01.567 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:01.567 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:01.567 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:01.567 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.567 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.567 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.567 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:01.567 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:01.567 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:01.567 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:01.567 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.567 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.567 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:01.567 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.567 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:01.567 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:01.567 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:01.567 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:01.567 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.567 07:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.849 nvme0n1 00:24:01.849 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg0MzdkNTNkMjk0YzUyNzU1NjVmYjU0MTI3NDc4NmHv1/kd: 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg0MzdkNTNkMjk0YzUyNzU1NjVmYjU0MTI3NDc4NmHv1/kd: 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: ]] 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.850 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.109 nvme0n1 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmNkMzgwYTM4MmUzNjk5ZDNiMzU3MmEwOGE4MTI1ZTUxNzVlZGMxZmY1ODc4YWQzDJIY4w==: 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: 00:24:02.109 07:26:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmNkMzgwYTM4MmUzNjk5ZDNiMzU3MmEwOGE4MTI1ZTUxNzVlZGMxZmY1ODc4YWQzDJIY4w==: 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: ]] 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.109 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.367 nvme0n1 00:24:02.367 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:24:02.367 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.367 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjRmMWJkZmJlZmNiMzllNzk4OWU0MTE1YWRhN2FlNjldOq8g: 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjRmMWJkZmJlZmNiMzllNzk4OWU0MTE1YWRhN2FlNjldOq8g: 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: ]] 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.368 07:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.626 nvme0n1 00:24:02.626 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.626 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.626 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.626 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.626 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:02.626 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.884 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.884 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.884 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.884 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.884 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.884 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:02.884 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:02.884 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.884 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:02.884 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:24:02.884 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:02.884 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2YwODBiZmMzNWQ5NDhhZmUyNzNhNTAwNzEwODIxZDVjZDc2NmRlMjY2NGRmMDJhUPWj7Q==: 00:24:02.884 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: 00:24:02.884 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:02.884 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:02.884 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2YwODBiZmMzNWQ5NDhhZmUyNzNhNTAwNzEwODIxZDVjZDc2NmRlMjY2NGRmMDJhUPWj7Q==: 00:24:02.884 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: ]] 00:24:02.884 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: 00:24:02.884 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:24:02.884 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.884 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:02.885 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:02.885 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:02.885 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:02.885 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:02.885 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.885 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.885 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.885 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.885 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:02.885 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:02.885 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:02.885 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.885 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.885 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:02.885 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.885 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:02.885 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:02.885 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:02.885 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:02.885 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.885 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.143 nvme0n1 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk0NmU3ZDNhZDZkNWNjOTE3ODQ3YTI0Zjc2NjY5ZDBkOWQ0NTM5Nzg4ODE3NDBhYWRiNWI0MDEzYmI0MzM1NwdCLNU=: 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk0NmU3ZDNhZDZkNWNjOTE3ODQ3YTI0Zjc2NjY5ZDBkOWQ0NTM5Nzg4ODE3NDBhYWRiNWI0MDEzYmI0MzM1NwdCLNU=: 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:03.143 07:26:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.143 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.401 nvme0n1 00:24:03.401 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.401 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.401 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.401 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:03.401 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.401 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg0MzdkNTNkMjk0YzUyNzU1NjVmYjU0MTI3NDc4NmHv1/kd: 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg0MzdkNTNkMjk0YzUyNzU1NjVmYjU0MTI3NDc4NmHv1/kd: 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: ]] 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.402 07:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.966 nvme0n1 00:24:03.966 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.966 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.966 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.966 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.966 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:03.966 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.966 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.966 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.966 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.966 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.966 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.966 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:03.966 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:03.966 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.966 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:03.966 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:03.966 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:03.966 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmNkMzgwYTM4MmUzNjk5ZDNiMzU3MmEwOGE4MTI1ZTUxNzVlZGMxZmY1ODc4YWQzDJIY4w==: 00:24:03.966 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: 00:24:03.966 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:03.966 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:03.966 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmNkMzgwYTM4MmUzNjk5ZDNiMzU3MmEwOGE4MTI1ZTUxNzVlZGMxZmY1ODc4YWQzDJIY4w==: 00:24:03.966 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: ]] 00:24:03.966 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: 
00:24:03.967 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:24:03.967 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:03.967 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:03.967 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:03.967 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:03.967 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:03.967 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:03.967 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.967 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.967 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.967 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:03.967 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:03.967 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:03.967 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:03.967 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.967 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.967 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:03.967 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.967 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:03.967 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:03.967 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:03.967 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:03.967 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.967 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.531 nvme0n1 00:24:04.531 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.531 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:04.531 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:04.531 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.531 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.531 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.531 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.531 07:26:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:04.531 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.531 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.531 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.531 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:04.531 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:04.531 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:04.531 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:04.531 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:04.531 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:04.531 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjRmMWJkZmJlZmNiMzllNzk4OWU0MTE1YWRhN2FlNjldOq8g: 00:24:04.531 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: 00:24:04.531 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:04.531 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:04.531 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjRmMWJkZmJlZmNiMzllNzk4OWU0MTE1YWRhN2FlNjldOq8g: 00:24:04.531 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: ]] 00:24:04.531 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: 00:24:04.531 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:24:04.531 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:04.531 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:04.532 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:04.532 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:04.532 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:04.532 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:04.532 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.532 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.532 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.532 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:04.532 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:04.532 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:04.532 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:04.532 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.532 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.532 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:04.532 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.532 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:04.532 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:04.532 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:04.532 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:04.532 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.532 07:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.097 nvme0n1 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2YwODBiZmMzNWQ5NDhhZmUyNzNhNTAwNzEwODIxZDVjZDc2NmRlMjY2NGRmMDJhUPWj7Q==: 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:M2YwODBiZmMzNWQ5NDhhZmUyNzNhNTAwNzEwODIxZDVjZDc2NmRlMjY2NGRmMDJhUPWj7Q==: 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: ]] 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.097 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.661 nvme0n1 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.661 07:26:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk0NmU3ZDNhZDZkNWNjOTE3ODQ3YTI0Zjc2NjY5ZDBkOWQ0NTM5Nzg4ODE3NDBhYWRiNWI0MDEzYmI0MzM1NwdCLNU=: 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk0NmU3ZDNhZDZkNWNjOTE3ODQ3YTI0Zjc2NjY5ZDBkOWQ0NTM5Nzg4ODE3NDBhYWRiNWI0MDEzYmI0MzM1NwdCLNU=: 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:05.661 07:26:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.661 07:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.225 nvme0n1 00:24:06.225 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.225 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.225 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.225 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.225 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:06.225 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.225 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.225 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.225 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.225 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.225 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.225 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:06.225 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:06.225 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:06.225 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:06.225 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:06.225 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:06.225 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:06.225 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg0MzdkNTNkMjk0YzUyNzU1NjVmYjU0MTI3NDc4NmHv1/kd: 00:24:06.226 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: 00:24:06.226 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:06.226 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:06.226 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg0MzdkNTNkMjk0YzUyNzU1NjVmYjU0MTI3NDc4NmHv1/kd: 00:24:06.226 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: ]] 00:24:06.226 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: 00:24:06.226 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:24:06.226 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:06.226 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:06.226 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:06.226 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:06.226 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:06.226 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:06.226 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.226 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.226 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.226 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:06.226 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:06.226 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:06.226 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:06.226 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.226 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.226 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:06.226 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.226 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:06.226 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:06.226 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:06.226 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:06.226 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.226 07:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:07.162 nvme0n1 00:24:07.162 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.162 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.162 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:07.162 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmNkMzgwYTM4MmUzNjk5ZDNiMzU3MmEwOGE4MTI1ZTUxNzVlZGMxZmY1ODc4YWQzDJIY4w==: 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmNkMzgwYTM4MmUzNjk5ZDNiMzU3MmEwOGE4MTI1ZTUxNzVlZGMxZmY1ODc4YWQzDJIY4w==: 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: ]] 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.163 07:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.097 nvme0n1 00:24:08.097 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.097 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.097 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.097 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.097 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:08.097 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.097 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.097 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.097 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.097 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.097 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.097 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:08.097 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:08.097 
07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:08.097 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:08.097 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:08.097 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:08.097 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjRmMWJkZmJlZmNiMzllNzk4OWU0MTE1YWRhN2FlNjldOq8g: 00:24:08.097 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: 00:24:08.097 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:08.097 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:08.097 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjRmMWJkZmJlZmNiMzllNzk4OWU0MTE1YWRhN2FlNjldOq8g: 00:24:08.097 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: ]] 00:24:08.097 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: 00:24:08.097 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:24:08.097 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:08.097 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:08.097 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:08.097 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:08.097 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:08.097 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:08.097 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.097 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.097 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.097 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:08.097 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:08.098 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:08.098 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:08.098 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.098 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.098 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:08.098 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.098 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:08.098 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:08.098 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:08.098 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:08.098 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.098 07:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.031 nvme0n1 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2YwODBiZmMzNWQ5NDhhZmUyNzNhNTAwNzEwODIxZDVjZDc2NmRlMjY2NGRmMDJhUPWj7Q==: 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2YwODBiZmMzNWQ5NDhhZmUyNzNhNTAwNzEwODIxZDVjZDc2NmRlMjY2NGRmMDJhUPWj7Q==: 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: ]] 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:09.031 
07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.031 07:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.967 nvme0n1 00:24:09.967 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.967 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.967 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:09.967 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.967 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.967 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.967 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.967 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.967 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.967 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:09.967 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.967 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:09.967 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:09.967 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:09.967 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:09.967 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:09.967 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:09.967 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk0NmU3ZDNhZDZkNWNjOTE3ODQ3YTI0Zjc2NjY5ZDBkOWQ0NTM5Nzg4ODE3NDBhYWRiNWI0MDEzYmI0MzM1NwdCLNU=: 00:24:09.967 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:09.967 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:09.967 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:09.967 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk0NmU3ZDNhZDZkNWNjOTE3ODQ3YTI0Zjc2NjY5ZDBkOWQ0NTM5Nzg4ODE3NDBhYWRiNWI0MDEzYmI0MzM1NwdCLNU=: 00:24:09.967 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:09.967 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:24:09.967 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:09.967 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:09.967 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:09.967 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:09.967 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:09.968 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:09.968 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.968 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.968 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.968 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:09.968 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:09.968 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:09.968 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:09.968 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.968 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.968 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:09.968 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:09.968 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:09.968 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:09.968 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:09.968 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:09.968 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.968 07:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.902 nvme0n1 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg0MzdkNTNkMjk0YzUyNzU1NjVmYjU0MTI3NDc4NmHv1/kd: 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg0MzdkNTNkMjk0YzUyNzU1NjVmYjU0MTI3NDc4NmHv1/kd: 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: ]] 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.902 nvme0n1 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.902 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmNkMzgwYTM4MmUzNjk5ZDNiMzU3MmEwOGE4MTI1ZTUxNzVlZGMxZmY1ODc4YWQzDJIY4w==: 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmNkMzgwYTM4MmUzNjk5ZDNiMzU3MmEwOGE4MTI1ZTUxNzVlZGMxZmY1ODc4YWQzDJIY4w==: 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: ]] 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.161 nvme0n1 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjRmMWJkZmJlZmNiMzllNzk4OWU0MTE1YWRhN2FlNjldOq8g: 00:24:11.161 07:26:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjRmMWJkZmJlZmNiMzllNzk4OWU0MTE1YWRhN2FlNjldOq8g: 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: ]] 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.161 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.419 nvme0n1 00:24:11.419 07:26:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2YwODBiZmMzNWQ5NDhhZmUyNzNhNTAwNzEwODIxZDVjZDc2NmRlMjY2NGRmMDJhUPWj7Q==: 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2YwODBiZmMzNWQ5NDhhZmUyNzNhNTAwNzEwODIxZDVjZDc2NmRlMjY2NGRmMDJhUPWj7Q==: 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: ]] 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.419 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:11.420 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:11.420 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:11.420 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:11.420 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.420 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.678 nvme0n1 00:24:11.678 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.678 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.678 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.678 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.678 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.678 07:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.678 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.678 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.678 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.678 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.678 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.678 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.678 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:11.678 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.678 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:24:11.678 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:11.678 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:11.678 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk0NmU3ZDNhZDZkNWNjOTE3ODQ3YTI0Zjc2NjY5ZDBkOWQ0NTM5Nzg4ODE3NDBhYWRiNWI0MDEzYmI0MzM1NwdCLNU=: 00:24:11.678 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:11.678 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:11.678 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:11.678 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk0NmU3ZDNhZDZkNWNjOTE3ODQ3YTI0Zjc2NjY5ZDBkOWQ0NTM5Nzg4ODE3NDBhYWRiNWI0MDEzYmI0MzM1NwdCLNU=: 00:24:11.678 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:11.678 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:24:11.678 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.678 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:11.678 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:11.678 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:11.678 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.678 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:11.678 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.678 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.678 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.678 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.678 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:11.678 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:11.678 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:11.678 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.678 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.678 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:11.678 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.678 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:11.678 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:11.678 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:11.678 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:11.678 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.678 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.936 nvme0n1 00:24:11.936 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.936 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.936 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.936 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.936 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.936 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.936 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.936 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.936 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.936 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.936 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.936 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:11.936 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.936 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:24:11.936 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.936 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:11.936 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:11.936 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:11.936 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg0MzdkNTNkMjk0YzUyNzU1NjVmYjU0MTI3NDc4NmHv1/kd: 00:24:11.936 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: 00:24:11.936 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:11.936 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:11.936 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg0MzdkNTNkMjk0YzUyNzU1NjVmYjU0MTI3NDc4NmHv1/kd: 00:24:11.936 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: ]] 00:24:11.936 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: 00:24:11.936 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:24:11.936 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.936 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:11.936 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:24:11.936 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:11.936 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.936 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:11.936 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.936 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.936 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.936 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.937 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:11.937 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:11.937 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:11.937 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.937 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.937 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:11.937 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.937 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:11.937 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:11.937 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:11.937 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:11.937 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.937 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.195 nvme0n1 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.195 
07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmNkMzgwYTM4MmUzNjk5ZDNiMzU3MmEwOGE4MTI1ZTUxNzVlZGMxZmY1ODc4YWQzDJIY4w==: 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmNkMzgwYTM4MmUzNjk5ZDNiMzU3MmEwOGE4MTI1ZTUxNzVlZGMxZmY1ODc4YWQzDJIY4w==: 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: ]] 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:12.195 07:26:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.195 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.454 nvme0n1 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjRmMWJkZmJlZmNiMzllNzk4OWU0MTE1YWRhN2FlNjldOq8g: 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjRmMWJkZmJlZmNiMzllNzk4OWU0MTE1YWRhN2FlNjldOq8g: 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: ]] 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.454 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.713 nvme0n1 00:24:12.713 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.713 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.713 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.713 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.713 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.713 07:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2YwODBiZmMzNWQ5NDhhZmUyNzNhNTAwNzEwODIxZDVjZDc2NmRlMjY2NGRmMDJhUPWj7Q==: 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2YwODBiZmMzNWQ5NDhhZmUyNzNhNTAwNzEwODIxZDVjZDc2NmRlMjY2NGRmMDJhUPWj7Q==: 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: ]] 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.713 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.971 nvme0n1 00:24:12.971 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.971 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.971 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.971 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.971 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.971 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.971 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.971 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.971 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.971 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.971 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.971 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.971 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:12.971 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.971 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:12.971 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:12.971 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:12.971 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk0NmU3ZDNhZDZkNWNjOTE3ODQ3YTI0Zjc2NjY5ZDBkOWQ0NTM5Nzg4ODE3NDBhYWRiNWI0MDEzYmI0MzM1NwdCLNU=: 00:24:12.971 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:12.971 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:12.971 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:12.971 
07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk0NmU3ZDNhZDZkNWNjOTE3ODQ3YTI0Zjc2NjY5ZDBkOWQ0NTM5Nzg4ODE3NDBhYWRiNWI0MDEzYmI0MzM1NwdCLNU=: 00:24:12.971 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:12.971 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:24:12.971 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.971 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:12.971 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:12.971 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:12.971 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.971 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:12.971 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.971 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.971 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.971 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.971 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:12.971 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:12.972 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:12.972 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.972 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.972 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:12.972 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.972 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:12.972 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:12.972 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:12.972 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:12.972 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.972 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.230 nvme0n1 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.230 
07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg0MzdkNTNkMjk0YzUyNzU1NjVmYjU0MTI3NDc4NmHv1/kd: 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg0MzdkNTNkMjk0YzUyNzU1NjVmYjU0MTI3NDc4NmHv1/kd: 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: ]] 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.230 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.488 nvme0n1 00:24:13.488 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.488 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.488 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.488 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.488 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.488 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.488 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.488 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.488 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.488 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.488 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.488 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.488 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:13.488 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.488 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:13.488 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:13.488 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:13.488 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NmNkMzgwYTM4MmUzNjk5ZDNiMzU3MmEwOGE4MTI1ZTUxNzVlZGMxZmY1ODc4YWQzDJIY4w==: 00:24:13.488 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: 00:24:13.488 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:13.488 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:13.488 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmNkMzgwYTM4MmUzNjk5ZDNiMzU3MmEwOGE4MTI1ZTUxNzVlZGMxZmY1ODc4YWQzDJIY4w==: 00:24:13.488 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: ]] 00:24:13.488 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: 00:24:13.488 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:24:13.488 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.489 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:13.489 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:13.489 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:13.489 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.489 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:13.489 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.489 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.489 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.489 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.489 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:13.489 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:13.489 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:13.489 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.489 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.489 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:13.489 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.489 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:13.489 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:13.489 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:13.489 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:13.489 07:26:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.489 07:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.746 nvme0n1 00:24:13.746 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.746 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.746 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.746 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.746 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.746 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.004 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.004 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.004 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.004 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.004 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.004 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.004 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:14.004 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.004 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:14.004 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:14.004 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:14.005 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjRmMWJkZmJlZmNiMzllNzk4OWU0MTE1YWRhN2FlNjldOq8g: 00:24:14.005 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: 00:24:14.005 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:14.005 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:14.005 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjRmMWJkZmJlZmNiMzllNzk4OWU0MTE1YWRhN2FlNjldOq8g: 00:24:14.005 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: ]] 00:24:14.005 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: 00:24:14.005 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:24:14.005 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.005 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:14.005 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:14.005 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:14.005 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.005 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:14.005 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.005 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.005 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.005 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.005 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:14.005 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:14.005 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:14.005 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.005 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.005 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:14.005 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.005 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:14.005 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:14.005 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:14.005 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:14.005 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.005 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.263 nvme0n1 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2YwODBiZmMzNWQ5NDhhZmUyNzNhNTAwNzEwODIxZDVjZDc2NmRlMjY2NGRmMDJhUPWj7Q==: 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2YwODBiZmMzNWQ5NDhhZmUyNzNhNTAwNzEwODIxZDVjZDc2NmRlMjY2NGRmMDJhUPWj7Q==: 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: ]] 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.263 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.522 nvme0n1 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk0NmU3ZDNhZDZkNWNjOTE3ODQ3YTI0Zjc2NjY5ZDBkOWQ0NTM5Nzg4ODE3NDBhYWRiNWI0MDEzYmI0MzM1NwdCLNU=: 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk0NmU3ZDNhZDZkNWNjOTE3ODQ3YTI0Zjc2NjY5ZDBkOWQ0NTM5Nzg4ODE3NDBhYWRiNWI0MDEzYmI0MzM1NwdCLNU=: 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:14.522 07:26:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.522 07:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.781 nvme0n1 00:24:14.781 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.781 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.781 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.781 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.781 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.781 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.781 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.781 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.781 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.781 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.039 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.039 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:15.039 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.039 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:15.039 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.039 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:15.039 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:15.039 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:15.039 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg0MzdkNTNkMjk0YzUyNzU1NjVmYjU0MTI3NDc4NmHv1/kd: 00:24:15.039 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: 00:24:15.039 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:15.039 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:15.039 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg0MzdkNTNkMjk0YzUyNzU1NjVmYjU0MTI3NDc4NmHv1/kd: 00:24:15.039 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: ]] 00:24:15.039 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: 00:24:15.039 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:24:15.039 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.039 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:15.039 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:15.039 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:15.039 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.039 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:15.039 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.039 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.039 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.039 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.039 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:15.039 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:15.039 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:15.039 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.039 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.039 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:15.039 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.039 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:15.039 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:15.039 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:15.039 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:15.039 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.039 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.297 nvme0n1 00:24:15.297 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.297 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.297 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.297 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.297 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmNkMzgwYTM4MmUzNjk5ZDNiMzU3MmEwOGE4MTI1ZTUxNzVlZGMxZmY1ODc4YWQzDJIY4w==: 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NmNkMzgwYTM4MmUzNjk5ZDNiMzU3MmEwOGE4MTI1ZTUxNzVlZGMxZmY1ODc4YWQzDJIY4w==: 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: ]] 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.556 07:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.122 nvme0n1 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.122 07:26:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjRmMWJkZmJlZmNiMzllNzk4OWU0MTE1YWRhN2FlNjldOq8g: 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjRmMWJkZmJlZmNiMzllNzk4OWU0MTE1YWRhN2FlNjldOq8g: 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: ]] 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.122 07:26:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.122 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.689 nvme0n1 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:M2YwODBiZmMzNWQ5NDhhZmUyNzNhNTAwNzEwODIxZDVjZDc2NmRlMjY2NGRmMDJhUPWj7Q==: 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2YwODBiZmMzNWQ5NDhhZmUyNzNhNTAwNzEwODIxZDVjZDc2NmRlMjY2NGRmMDJhUPWj7Q==: 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: ]] 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:16.689 07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.689 
07:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.254 nvme0n1 00:24:17.254 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.254 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.254 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.254 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.254 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.254 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.254 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.254 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.254 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.254 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.254 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.254 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.254 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:17.254 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.254 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:17.254 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:17.254 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:17.254 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk0NmU3ZDNhZDZkNWNjOTE3ODQ3YTI0Zjc2NjY5ZDBkOWQ0NTM5Nzg4ODE3NDBhYWRiNWI0MDEzYmI0MzM1NwdCLNU=: 00:24:17.254 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:17.254 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:17.254 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:17.254 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk0NmU3ZDNhZDZkNWNjOTE3ODQ3YTI0Zjc2NjY5ZDBkOWQ0NTM5Nzg4ODE3NDBhYWRiNWI0MDEzYmI0MzM1NwdCLNU=: 00:24:17.255 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:17.255 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:24:17.255 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.255 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:17.255 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:17.255 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:17.255 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.255 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:17.255 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.255 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.255 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.255 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.255 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:17.255 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:17.255 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:17.255 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.255 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.255 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:17.255 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.255 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:17.255 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:17.255 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:17.255 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:17.255 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.255 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.821 nvme0n1 00:24:17.821 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.821 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.821 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.821 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.821 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.821 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.821 07:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.821 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.821 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.821 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.821 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.821 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:17.821 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.821 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:17.821 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.821 07:26:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:17.821 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:17.821 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:17.821 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg0MzdkNTNkMjk0YzUyNzU1NjVmYjU0MTI3NDc4NmHv1/kd: 00:24:17.821 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: 00:24:17.821 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:17.821 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:17.821 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg0MzdkNTNkMjk0YzUyNzU1NjVmYjU0MTI3NDc4NmHv1/kd: 00:24:17.821 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: ]] 00:24:17.821 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: 00:24:17.821 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:24:17.821 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.821 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:17.821 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:17.821 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:17.821 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.821 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:17.822 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.822 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.822 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.822 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.822 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:17.822 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:17.822 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:17.822 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.822 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.822 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:17.822 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.822 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:17.822 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:17.822 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:17.822 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:17.822 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.822 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.756 nvme0n1 00:24:18.756 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.756 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.756 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.756 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.756 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.756 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.756 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.756 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.756 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.756 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.756 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.756 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.756 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:24:18.756 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.756 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:18.756 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:18.756 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:18.756 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmNkMzgwYTM4MmUzNjk5ZDNiMzU3MmEwOGE4MTI1ZTUxNzVlZGMxZmY1ODc4YWQzDJIY4w==: 00:24:18.757 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: 00:24:18.757 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:18.757 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:18.757 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmNkMzgwYTM4MmUzNjk5ZDNiMzU3MmEwOGE4MTI1ZTUxNzVlZGMxZmY1ODc4YWQzDJIY4w==: 00:24:18.757 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: ]] 00:24:18.757 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: 00:24:18.757 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:24:18.757 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.757 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:18.757 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:18.757 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:18.757 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.757 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:18.757 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.757 07:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.757 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.757 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.757 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:18.757 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:18.757 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:18.757 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.757 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.757 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:18.757 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.757 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:18.757 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:18.757 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:18.757 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:18.757 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.757 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.691 nvme0n1 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjRmMWJkZmJlZmNiMzllNzk4OWU0MTE1YWRhN2FlNjldOq8g: 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjRmMWJkZmJlZmNiMzllNzk4OWU0MTE1YWRhN2FlNjldOq8g: 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: ]] 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.691 
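Note on the target-side half of each cycle: the nvmet_auth_set_key calls traced above (host/auth.sh@42-51) stage the DH-HMAC-CHAP material for the given keyid before the host reconnects, echoing the digest, DH group, key and optional controller key. The xtrace does not show where those echo commands are redirected, so the sketch below is only a best-guess reconstruction; the /sys/kernel/config/nvmet/hosts/.../dhchap_* attribute names are an assumption, not something this log confirms.

# Hypothetical reconstruction of nvmet_auth_set_key as traced above.
# ASSUMPTION: the echoed values land in the nvmet configfs host entry;
# the trace shows the echo commands but not their redirection targets.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]:-}
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac(${digest})" > "${host}/dhchap_hash"      # e.g. hmac(sha384)
    echo "${dhgroup}"      > "${host}/dhchap_dhgroup"   # e.g. ffdhe8192
    echo "${key}"          > "${host}/dhchap_key"       # DHHC-1:... host secret
    [[ -n ${ckey} ]] && echo "${ckey}" > "${host}/dhchap_ctrl_key"
}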
07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.691 07:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.627 nvme0n1 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2YwODBiZmMzNWQ5NDhhZmUyNzNhNTAwNzEwODIxZDVjZDc2NmRlMjY2NGRmMDJhUPWj7Q==: 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2YwODBiZmMzNWQ5NDhhZmUyNzNhNTAwNzEwODIxZDVjZDc2NmRlMjY2NGRmMDJhUPWj7Q==: 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: ]] 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.627 07:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.562 nvme0n1 00:24:21.562 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.562 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.562 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:21.562 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.562 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.562 07:26:24 
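The host-side half, connect_authenticate (host/auth.sh@55-61 in this trace), restricts bdev_nvme to a single digest/dhgroup pair and then attaches to the target with the key under test. Outside the test harness the same two RPCs can be issued with scripts/rpc.py instead of the rpc_cmd wrapper; the sketch below assumes the named keys (key1/ckey1) were already registered with SPDK's keyring earlier in the run, a step that is not part of this excerpt.

# Minimal host-side sketch of the two RPCs traced at host/auth.sh@60-61.
# ASSUMPTION: key1/ckey1 already exist as named keys in SPDK's keyring.
rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1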
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.562 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.562 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.563 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.563 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.563 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.563 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:21.563 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:21.563 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.563 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:21.563 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:21.563 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:21.563 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk0NmU3ZDNhZDZkNWNjOTE3ODQ3YTI0Zjc2NjY5ZDBkOWQ0NTM5Nzg4ODE3NDBhYWRiNWI0MDEzYmI0MzM1NwdCLNU=: 00:24:21.563 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:21.563 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:21.563 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:21.563 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk0NmU3ZDNhZDZkNWNjOTE3ODQ3YTI0Zjc2NjY5ZDBkOWQ0NTM5Nzg4ODE3NDBhYWRiNWI0MDEzYmI0MzM1NwdCLNU=: 00:24:21.563 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:21.563 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:24:21.563 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:21.563 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:21.563 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:21.563 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:21.563 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:21.563 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:21.563 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.563 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.563 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.563 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:21.563 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:21.563 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:21.563 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:21.563 07:26:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.563 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.563 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:21.563 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.563 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:21.563 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:21.563 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:21.563 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:21.563 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.563 07:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.498 nvme0n1 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg0MzdkNTNkMjk0YzUyNzU1NjVmYjU0MTI3NDc4NmHv1/kd: 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg0MzdkNTNkMjk0YzUyNzU1NjVmYjU0MTI3NDc4NmHv1/kd: 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: ]] 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:22.498 nvme0n1 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmNkMzgwYTM4MmUzNjk5ZDNiMzU3MmEwOGE4MTI1ZTUxNzVlZGMxZmY1ODc4YWQzDJIY4w==: 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmNkMzgwYTM4MmUzNjk5ZDNiMzU3MmEwOGE4MTI1ZTUxNzVlZGMxZmY1ODc4YWQzDJIY4w==: 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: ]] 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:22.498 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.499 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.499 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.499 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.499 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:22.499 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:22.499 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:22.499 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.499 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.499 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:22.499 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.499 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:22.499 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:22.499 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:22.499 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:22.499 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.499 07:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.757 nvme0n1 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:22.757 
07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjRmMWJkZmJlZmNiMzllNzk4OWU0MTE1YWRhN2FlNjldOq8g: 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjRmMWJkZmJlZmNiMzllNzk4OWU0MTE1YWRhN2FlNjldOq8g: 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: ]] 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.757 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.016 nvme0n1 00:24:23.016 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.016 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.016 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.016 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.016 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.016 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.016 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.016 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.016 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.017 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.017 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.017 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.017 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:23.017 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.017 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:23.017 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:23.017 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:23.017 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2YwODBiZmMzNWQ5NDhhZmUyNzNhNTAwNzEwODIxZDVjZDc2NmRlMjY2NGRmMDJhUPWj7Q==: 00:24:23.017 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: 00:24:23.017 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:23.017 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:23.017 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2YwODBiZmMzNWQ5NDhhZmUyNzNhNTAwNzEwODIxZDVjZDc2NmRlMjY2NGRmMDJhUPWj7Q==: 00:24:23.017 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: ]] 00:24:23.017 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: 00:24:23.017 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:24:23.017 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.017 
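The get_main_ns_ip helper traced around nvmf/common.sh@769-783 only decides which address the host dials: it maps the transport to the name of an environment variable and dereferences it, which resolves to 10.0.0.1 in this run. A condensed sketch of that logic follows; TEST_TRANSPORT and the NVMF_* variables are assumed to be exported by the harness, as only their names and the final value appear in the trace.

# Condensed sketch of get_main_ns_ip as seen in nvmf/common.sh@769-783.
# ASSUMPTION: TEST_TRANSPORT and NVMF_INITIATOR_IP are exported by the harness.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP
        [tcp]=NVMF_INITIATOR_IP
    )
    ip=${ip_candidates[$TEST_TRANSPORT]}   # -> NVMF_INITIATOR_IP for tcp
    [[ -n ${ip} && -n ${!ip} ]] || return 1
    echo "${!ip}"                          # -> 10.0.0.1 in this run
}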
07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:23.017 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:23.017 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:23.017 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.017 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:23.017 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.017 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.017 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.017 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.017 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:23.017 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:23.017 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:23.017 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.017 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.017 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:23.017 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.017 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:23.017 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:23.017 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:23.017 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:23.017 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.017 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.276 nvme0n1 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk0NmU3ZDNhZDZkNWNjOTE3ODQ3YTI0Zjc2NjY5ZDBkOWQ0NTM5Nzg4ODE3NDBhYWRiNWI0MDEzYmI0MzM1NwdCLNU=: 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk0NmU3ZDNhZDZkNWNjOTE3ODQ3YTI0Zjc2NjY5ZDBkOWQ0NTM5Nzg4ODE3NDBhYWRiNWI0MDEzYmI0MzM1NwdCLNU=: 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.276 nvme0n1 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.276 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.534 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.534 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.534 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.534 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.534 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.534 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.534 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.534 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.534 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:23.534 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.534 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:24:23.534 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.534 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:23.534 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:23.534 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:23.534 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg0MzdkNTNkMjk0YzUyNzU1NjVmYjU0MTI3NDc4NmHv1/kd: 00:24:23.534 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: 00:24:23.534 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:23.534 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:23.534 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg0MzdkNTNkMjk0YzUyNzU1NjVmYjU0MTI3NDc4NmHv1/kd: 00:24:23.534 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: ]] 00:24:23.534 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: 00:24:23.534 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:24:23.534 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.534 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:23.534 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:23.534 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:23.534 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.534 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:23.534 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.534 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.534 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.534 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.534 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:23.534 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:23.534 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:23.534 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.534 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.534 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:23.534 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.535 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:23.535 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:23.535 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:23.535 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:23.535 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.535 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.535 nvme0n1 00:24:23.535 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.535 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.535 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.535 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.535 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.793 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.793 
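Taken together, the loops visible at host/auth.sh@100-104 sweep every digest/dhgroup/keyid combination, staging the key on the target and re-running connect_authenticate each time. The skeleton below shows that sweep; the array contents are limited to the values that actually appear in this excerpt (sha384/sha512 and ffdhe2048/ffdhe3072/ffdhe8192), and keys/ckeys are assumed to hold the DHHC-1 secrets generated earlier in the run.

# Skeleton of the sweep driven by host/auth.sh@100-104 in the trace.
# ASSUMPTION: array contents shown are only those visible in this excerpt.
digests=(sha384 sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe8192)
# keys[0..4] / ckeys[0..4] hold the DHHC-1:... secrets set up earlier.

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done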
07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.793 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.793 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.793 07:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmNkMzgwYTM4MmUzNjk5ZDNiMzU3MmEwOGE4MTI1ZTUxNzVlZGMxZmY1ODc4YWQzDJIY4w==: 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmNkMzgwYTM4MmUzNjk5ZDNiMzU3MmEwOGE4MTI1ZTUxNzVlZGMxZmY1ODc4YWQzDJIY4w==: 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: ]] 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:23.793 07:26:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.793 nvme0n1 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.793 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjRmMWJkZmJlZmNiMzllNzk4OWU0MTE1YWRhN2FlNjldOq8g: 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: 00:24:24.052 07:26:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjRmMWJkZmJlZmNiMzllNzk4OWU0MTE1YWRhN2FlNjldOq8g: 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: ]] 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.052 nvme0n1 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.052 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.310 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.310 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.310 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.310 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.310 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.310 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.310 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:24.310 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.310 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:24.310 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:24.310 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:24.310 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2YwODBiZmMzNWQ5NDhhZmUyNzNhNTAwNzEwODIxZDVjZDc2NmRlMjY2NGRmMDJhUPWj7Q==: 00:24:24.310 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: 00:24:24.310 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:24.310 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:24.310 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2YwODBiZmMzNWQ5NDhhZmUyNzNhNTAwNzEwODIxZDVjZDc2NmRlMjY2NGRmMDJhUPWj7Q==: 00:24:24.310 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: ]] 00:24:24.310 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: 00:24:24.311 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:24:24.311 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.311 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:24.311 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:24.311 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:24.311 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.311 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:24.311 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.311 07:26:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.311 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.311 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.311 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:24.311 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:24.311 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:24.311 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.311 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.311 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:24.311 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.311 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:24.311 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:24.311 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:24.311 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:24.311 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.311 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.311 nvme0n1 00:24:24.311 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.311 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.311 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.311 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.311 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.311 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.569 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.569 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.569 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.569 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.569 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.569 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.569 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:24.569 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.569 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:24.569 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:24.569 
07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:24.569 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk0NmU3ZDNhZDZkNWNjOTE3ODQ3YTI0Zjc2NjY5ZDBkOWQ0NTM5Nzg4ODE3NDBhYWRiNWI0MDEzYmI0MzM1NwdCLNU=: 00:24:24.569 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:24.569 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:24.569 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:24.569 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk0NmU3ZDNhZDZkNWNjOTE3ODQ3YTI0Zjc2NjY5ZDBkOWQ0NTM5Nzg4ODE3NDBhYWRiNWI0MDEzYmI0MzM1NwdCLNU=: 00:24:24.569 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:24.569 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:24:24.569 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.569 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:24.569 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:24.569 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:24.569 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.569 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:24.569 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.569 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.569 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.569 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.569 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:24.570 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:24.570 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:24.570 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.570 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.570 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:24.570 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.570 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:24.570 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:24.570 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:24.570 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:24.570 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.570 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:24:24.570 nvme0n1 00:24:24.570 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.570 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.570 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.570 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.570 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.570 07:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.828 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.828 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.828 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.828 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.828 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.828 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:24.828 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.828 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:24:24.828 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.828 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:24.828 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:24.828 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:24.828 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg0MzdkNTNkMjk0YzUyNzU1NjVmYjU0MTI3NDc4NmHv1/kd: 00:24:24.828 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: 00:24:24.828 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:24.828 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:24.828 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg0MzdkNTNkMjk0YzUyNzU1NjVmYjU0MTI3NDc4NmHv1/kd: 00:24:24.828 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: ]] 00:24:24.828 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: 00:24:24.828 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:24:24.828 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.828 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:24.828 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:24.828 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:24.828 07:26:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.829 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:24.829 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.829 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.829 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.829 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.829 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:24.829 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:24.829 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:24.829 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.829 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.829 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:24.829 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.829 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:24.829 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:24.829 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:24.829 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:24.829 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.829 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.087 nvme0n1 00:24:25.087 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.087 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.087 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.087 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.087 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.087 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.087 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.087 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.087 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.087 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.087 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.087 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.087 07:26:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:25.087 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.087 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:25.087 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:25.087 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:25.087 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmNkMzgwYTM4MmUzNjk5ZDNiMzU3MmEwOGE4MTI1ZTUxNzVlZGMxZmY1ODc4YWQzDJIY4w==: 00:24:25.087 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: 00:24:25.087 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:25.087 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:25.087 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmNkMzgwYTM4MmUzNjk5ZDNiMzU3MmEwOGE4MTI1ZTUxNzVlZGMxZmY1ODc4YWQzDJIY4w==: 00:24:25.087 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: ]] 00:24:25.087 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: 00:24:25.087 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:24:25.087 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.087 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:25.088 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:25.088 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:25.088 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.088 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:25.088 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.088 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.088 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.088 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.088 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:25.088 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:25.088 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:25.088 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.088 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.088 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:25.088 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.088 07:26:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:25.088 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:25.088 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:25.088 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:25.088 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.088 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.346 nvme0n1 00:24:25.346 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.346 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.346 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.346 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.346 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.346 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.346 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.346 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.346 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.346 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.346 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.346 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.346 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:25.346 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.346 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:25.346 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:25.346 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:25.346 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjRmMWJkZmJlZmNiMzllNzk4OWU0MTE1YWRhN2FlNjldOq8g: 00:24:25.346 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: 00:24:25.346 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:25.346 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:25.346 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjRmMWJkZmJlZmNiMzllNzk4OWU0MTE1YWRhN2FlNjldOq8g: 00:24:25.346 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: ]] 00:24:25.346 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: 00:24:25.346 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:24:25.346 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.346 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:25.346 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:25.346 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:25.347 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.347 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:25.347 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.347 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.347 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.347 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.347 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:25.347 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:25.347 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:25.347 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.347 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.347 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:25.347 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.347 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:25.347 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:25.347 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:25.347 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:25.347 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.347 07:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.605 nvme0n1 00:24:25.605 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.605 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.605 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.605 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.605 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.605 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.864 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.864 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:25.864 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.864 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.864 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.864 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.864 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:24:25.864 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.864 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:25.864 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:25.864 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:25.864 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2YwODBiZmMzNWQ5NDhhZmUyNzNhNTAwNzEwODIxZDVjZDc2NmRlMjY2NGRmMDJhUPWj7Q==: 00:24:25.864 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: 00:24:25.864 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:25.864 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:25.864 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2YwODBiZmMzNWQ5NDhhZmUyNzNhNTAwNzEwODIxZDVjZDc2NmRlMjY2NGRmMDJhUPWj7Q==: 00:24:25.864 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: ]] 00:24:25.864 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: 00:24:25.864 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:24:25.864 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.864 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:25.864 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:25.864 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:25.864 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.864 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:25.864 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.864 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.865 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.865 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.865 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:25.865 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:25.865 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:25.865 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.865 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.865 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:25.865 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.865 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:25.865 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:25.865 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:25.865 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:25.865 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.865 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.126 nvme0n1 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk0NmU3ZDNhZDZkNWNjOTE3ODQ3YTI0Zjc2NjY5ZDBkOWQ0NTM5Nzg4ODE3NDBhYWRiNWI0MDEzYmI0MzM1NwdCLNU=: 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Yjk0NmU3ZDNhZDZkNWNjOTE3ODQ3YTI0Zjc2NjY5ZDBkOWQ0NTM5Nzg4ODE3NDBhYWRiNWI0MDEzYmI0MzM1NwdCLNU=: 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.126 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.384 nvme0n1 00:24:26.384 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.384 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.384 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.384 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.384 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.384 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.384 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.384 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.384 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.384 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.385 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.385 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:26.385 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.385 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:26.385 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.385 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:26.385 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:26.385 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:26.385 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg0MzdkNTNkMjk0YzUyNzU1NjVmYjU0MTI3NDc4NmHv1/kd: 00:24:26.385 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: 00:24:26.385 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:26.385 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:26.385 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg0MzdkNTNkMjk0YzUyNzU1NjVmYjU0MTI3NDc4NmHv1/kd: 00:24:26.385 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: ]] 00:24:26.385 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: 00:24:26.385 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:24:26.385 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.385 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:26.385 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:26.385 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:26.385 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.385 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:26.385 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.385 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.385 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.385 07:26:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.385 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:26.385 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:26.385 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:26.385 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.385 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.385 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:26.385 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.385 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:26.385 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:26.385 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:26.385 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:26.385 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.385 07:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.951 nvme0n1 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NmNkMzgwYTM4MmUzNjk5ZDNiMzU3MmEwOGE4MTI1ZTUxNzVlZGMxZmY1ODc4YWQzDJIY4w==: 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmNkMzgwYTM4MmUzNjk5ZDNiMzU3MmEwOGE4MTI1ZTUxNzVlZGMxZmY1ODc4YWQzDJIY4w==: 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: ]] 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:26.951 07:26:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.951 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.517 nvme0n1 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjRmMWJkZmJlZmNiMzllNzk4OWU0MTE1YWRhN2FlNjldOq8g: 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjRmMWJkZmJlZmNiMzllNzk4OWU0MTE1YWRhN2FlNjldOq8g: 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: ]] 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.518 07:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.084 nvme0n1 00:24:28.084 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.084 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.084 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.084 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.084 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2YwODBiZmMzNWQ5NDhhZmUyNzNhNTAwNzEwODIxZDVjZDc2NmRlMjY2NGRmMDJhUPWj7Q==: 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2YwODBiZmMzNWQ5NDhhZmUyNzNhNTAwNzEwODIxZDVjZDc2NmRlMjY2NGRmMDJhUPWj7Q==: 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: ]] 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.085 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.657 nvme0n1 00:24:28.657 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.657 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.657 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.657 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.657 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.657 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.658 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.658 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.658 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.658 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.658 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.658 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.658 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:28.658 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.658 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:28.658 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:28.658 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:28.658 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk0NmU3ZDNhZDZkNWNjOTE3ODQ3YTI0Zjc2NjY5ZDBkOWQ0NTM5Nzg4ODE3NDBhYWRiNWI0MDEzYmI0MzM1NwdCLNU=: 00:24:28.658 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:28.658 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:28.658 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:28.658 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk0NmU3ZDNhZDZkNWNjOTE3ODQ3YTI0Zjc2NjY5ZDBkOWQ0NTM5Nzg4ODE3NDBhYWRiNWI0MDEzYmI0MzM1NwdCLNU=: 00:24:28.658 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:28.658 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:24:28.658 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.658 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:28.658 07:26:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:28.658 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:28.658 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.658 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:28.658 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.658 07:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.658 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.658 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.658 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:28.658 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:28.658 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:28.658 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.658 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.658 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:28.658 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.658 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:28.658 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:28.658 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:28.658 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:28.658 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.658 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.225 nvme0n1 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg0MzdkNTNkMjk0YzUyNzU1NjVmYjU0MTI3NDc4NmHv1/kd: 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg0MzdkNTNkMjk0YzUyNzU1NjVmYjU0MTI3NDc4NmHv1/kd: 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: ]] 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGVhNWYzOTY0MDEyNDFkOTYzZTA5MjhhMjMzNzU2YWUzMjRkOGQwYzI2ZDViZTc1YjVlMjlhNWZlNTViY2E1ZsEkuhg=: 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.225 07:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.161 nvme0n1 00:24:30.161 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.161 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.161 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.161 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.161 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.161 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.161 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.161 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.161 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.161 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.161 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.161 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.161 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:30.161 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.161 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:30.161 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:30.161 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:30.161 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmNkMzgwYTM4MmUzNjk5ZDNiMzU3MmEwOGE4MTI1ZTUxNzVlZGMxZmY1ODc4YWQzDJIY4w==: 00:24:30.161 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: 00:24:30.161 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:30.161 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:30.161 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NmNkMzgwYTM4MmUzNjk5ZDNiMzU3MmEwOGE4MTI1ZTUxNzVlZGMxZmY1ODc4YWQzDJIY4w==: 00:24:30.161 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: ]] 00:24:30.161 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: 00:24:30.161 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:24:30.161 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.161 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:30.161 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:30.161 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:30.161 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.161 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:30.161 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.161 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.161 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.161 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.161 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:30.161 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:30.161 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:30.161 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.162 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.162 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:30.162 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.162 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:30.162 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:30.162 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:30.162 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:30.162 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.162 07:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.096 nvme0n1 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.096 07:26:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjRmMWJkZmJlZmNiMzllNzk4OWU0MTE1YWRhN2FlNjldOq8g: 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjRmMWJkZmJlZmNiMzllNzk4OWU0MTE1YWRhN2FlNjldOq8g: 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: ]] 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.096 07:26:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.096 07:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.030 nvme0n1 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:M2YwODBiZmMzNWQ5NDhhZmUyNzNhNTAwNzEwODIxZDVjZDc2NmRlMjY2NGRmMDJhUPWj7Q==: 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2YwODBiZmMzNWQ5NDhhZmUyNzNhNTAwNzEwODIxZDVjZDc2NmRlMjY2NGRmMDJhUPWj7Q==: 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: ]] 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmM1NDEzZjZiNzJmY2M1NGQwODYwM2RkNWYxZGRhMDWR0GZP: 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:32.030 07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.030 
07:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.966 nvme0n1 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjk0NmU3ZDNhZDZkNWNjOTE3ODQ3YTI0Zjc2NjY5ZDBkOWQ0NTM5Nzg4ODE3NDBhYWRiNWI0MDEzYmI0MzM1NwdCLNU=: 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjk0NmU3ZDNhZDZkNWNjOTE3ODQ3YTI0Zjc2NjY5ZDBkOWQ0NTM5Nzg4ODE3NDBhYWRiNWI0MDEzYmI0MzM1NwdCLNU=: 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.966 07:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.970 nvme0n1 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmNkMzgwYTM4MmUzNjk5ZDNiMzU3MmEwOGE4MTI1ZTUxNzVlZGMxZmY1ODc4YWQzDJIY4w==: 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmNkMzgwYTM4MmUzNjk5ZDNiMzU3MmEwOGE4MTI1ZTUxNzVlZGMxZmY1ODc4YWQzDJIY4w==: 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: ]] 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.970 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.970 request: 00:24:33.970 { 00:24:33.970 "name": "nvme0", 00:24:33.970 "trtype": "tcp", 00:24:33.970 "traddr": "10.0.0.1", 00:24:33.970 "adrfam": "ipv4", 00:24:33.970 "trsvcid": "4420", 00:24:33.970 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:33.970 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:33.970 "prchk_reftag": false, 00:24:33.970 "prchk_guard": false, 00:24:33.970 "hdgst": false, 00:24:33.970 "ddgst": false, 00:24:33.971 "allow_unrecognized_csi": false, 00:24:33.971 "method": "bdev_nvme_attach_controller", 00:24:33.971 "req_id": 1 00:24:33.971 } 00:24:33.971 Got JSON-RPC error response 00:24:33.971 response: 00:24:33.971 { 00:24:33.971 "code": -5, 00:24:33.971 "message": "Input/output error" 00:24:33.971 } 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.971 request: 00:24:33.971 { 00:24:33.971 "name": "nvme0", 00:24:33.971 "trtype": "tcp", 00:24:33.971 "traddr": "10.0.0.1", 00:24:33.971 "adrfam": "ipv4", 00:24:33.971 "trsvcid": "4420", 00:24:33.971 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:33.971 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:33.971 "prchk_reftag": false, 00:24:33.971 "prchk_guard": false, 00:24:33.971 "hdgst": false, 00:24:33.971 "ddgst": false, 00:24:33.971 "dhchap_key": "key2", 00:24:33.971 "allow_unrecognized_csi": false, 00:24:33.971 "method": "bdev_nvme_attach_controller", 00:24:33.971 "req_id": 1 00:24:33.971 } 00:24:33.971 Got JSON-RPC error response 00:24:33.971 response: 00:24:33.971 { 00:24:33.971 "code": -5, 00:24:33.971 "message": "Input/output error" 00:24:33.971 } 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.971 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.229 request: 00:24:34.229 { 00:24:34.229 "name": "nvme0", 00:24:34.229 "trtype": "tcp", 00:24:34.229 "traddr": "10.0.0.1", 00:24:34.229 "adrfam": "ipv4", 00:24:34.229 "trsvcid": "4420", 00:24:34.229 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:34.229 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:34.229 "prchk_reftag": false, 00:24:34.229 "prchk_guard": false, 00:24:34.229 "hdgst": false, 00:24:34.229 "ddgst": false, 00:24:34.229 "dhchap_key": "key1", 00:24:34.229 "dhchap_ctrlr_key": "ckey2", 00:24:34.229 "allow_unrecognized_csi": false, 00:24:34.229 "method": "bdev_nvme_attach_controller", 00:24:34.229 "req_id": 1 00:24:34.229 } 00:24:34.229 Got JSON-RPC error response 00:24:34.229 response: 00:24:34.229 { 00:24:34.229 "code": -5, 00:24:34.229 "message": "Input/output 
error" 00:24:34.229 } 00:24:34.229 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:34.229 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:24:34.229 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:34.229 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:34.229 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:34.229 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:24:34.229 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:34.229 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:34.229 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:34.229 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.229 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.229 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:34.229 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.229 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:34.229 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:34.229 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:34.229 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:34.229 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.229 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.229 nvme0n1 00:24:34.230 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.230 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:34.230 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.230 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:34.230 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:34.230 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:34.230 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjRmMWJkZmJlZmNiMzllNzk4OWU0MTE1YWRhN2FlNjldOq8g: 00:24:34.230 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: 00:24:34.230 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:34.230 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:34.230 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjRmMWJkZmJlZmNiMzllNzk4OWU0MTE1YWRhN2FlNjldOq8g: 00:24:34.230 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: ]] 00:24:34.230 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: 00:24:34.230 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:34.230 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.230 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.488 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.488 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.488 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.488 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.488 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:24:34.488 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.488 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.488 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:34.488 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:24:34.488 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:34.488 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:34.488 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:34.488 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:34.488 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:34.488 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:34.488 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.488 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.488 request: 00:24:34.488 { 00:24:34.488 "name": "nvme0", 00:24:34.488 "dhchap_key": "key1", 00:24:34.488 "dhchap_ctrlr_key": "ckey2", 00:24:34.488 "method": "bdev_nvme_set_keys", 00:24:34.488 "req_id": 1 00:24:34.488 } 00:24:34.488 Got JSON-RPC error response 00:24:34.488 response: 00:24:34.488 { 00:24:34.488 "code": -13, 00:24:34.488 "message": "Permission denied" 00:24:34.488 } 00:24:34.488 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:34.488 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:24:34.488 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:34.488 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:34.488 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:24:34.488 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.488 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:34.488 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.488 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.488 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.488 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:24:34.488 07:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:24:35.423 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.423 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.423 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.423 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:35.423 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.680 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:24:35.680 07:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:24:36.614 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.615 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:36.615 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.615 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.615 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.615 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:24:36.615 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:36.615 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.615 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:36.615 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:36.615 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:36.615 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmNkMzgwYTM4MmUzNjk5ZDNiMzU3MmEwOGE4MTI1ZTUxNzVlZGMxZmY1ODc4YWQzDJIY4w==: 00:24:36.615 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: 00:24:36.615 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:36.615 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:36.615 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmNkMzgwYTM4MmUzNjk5ZDNiMzU3MmEwOGE4MTI1ZTUxNzVlZGMxZmY1ODc4YWQzDJIY4w==: 00:24:36.615 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: ]] 00:24:36.615 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MzEyMWVkMGUyOGM0NjAxNzJjYjczMjQwNGMxMTljYWUxY2MxZTg1NjMxZTE2ZjVkT2fTfA==: 00:24:36.615 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:24:36.615 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:36.615 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:36.615 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:36.615 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.615 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.615 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:36.615 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.615 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:36.615 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:36.615 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:36.615 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:36.615 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.615 07:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.873 nvme0n1 00:24:36.873 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.873 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:36.873 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.873 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:36.873 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:36.873 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:36.874 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjRmMWJkZmJlZmNiMzllNzk4OWU0MTE1YWRhN2FlNjldOq8g: 00:24:36.874 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: 00:24:36.874 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:36.874 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:36.874 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjRmMWJkZmJlZmNiMzllNzk4OWU0MTE1YWRhN2FlNjldOq8g: 00:24:36.874 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: ]] 00:24:36.874 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjNiNzkzOGNkYzVhYmM1M2Q0NTY5ZjM1ZjdkYTk2MjC0U47Q: 00:24:36.874 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:36.874 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:24:36.874 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:36.874 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:36.874 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:36.874 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:36.874 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:36.874 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:36.874 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.874 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.874 request: 00:24:36.874 { 00:24:36.874 "name": "nvme0", 00:24:36.874 "dhchap_key": "key2", 00:24:36.874 "dhchap_ctrlr_key": "ckey1", 00:24:36.874 "method": "bdev_nvme_set_keys", 00:24:36.874 "req_id": 1 00:24:36.874 } 00:24:36.874 Got JSON-RPC error response 00:24:36.874 response: 00:24:36.874 { 00:24:36.874 "code": -13, 00:24:36.874 "message": "Permission denied" 00:24:36.874 } 00:24:36.874 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:36.874 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:24:36.874 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:36.874 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:36.874 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:36.874 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.874 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.874 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:24:36.874 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.874 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.874 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:24:36.874 07:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:24:37.806 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.806 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:24:37.806 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.806 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.806 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.806 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:24:37.806 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:24:37.806 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:24:37.806 07:26:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:24:37.806 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:37.806 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:24:37.806 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:37.806 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:24:37.806 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:37.806 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:37.806 rmmod nvme_tcp 00:24:38.063 rmmod nvme_fabrics 00:24:38.063 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:38.063 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:24:38.063 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:24:38.063 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2594241 ']' 00:24:38.063 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2594241 00:24:38.063 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 2594241 ']' 00:24:38.063 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 2594241 00:24:38.063 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:24:38.063 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:38.063 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2594241 00:24:38.063 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:38.063 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:38.063 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2594241' 00:24:38.063 killing process with pid 2594241 00:24:38.063 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 2594241 00:24:38.063 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 2594241 00:24:38.323 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:38.323 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:38.323 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:38.323 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:24:38.323 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:24:38.323 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:38.323 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:38.323 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:38.323 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:38.323 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.323 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:24:38.323 07:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.231 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:40.231 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:40.231 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:40.231 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:24:40.231 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:40.231 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:24:40.231 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:40.231 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:40.231 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:40.231 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:40.231 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:40.231 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:24:40.231 07:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:41.609 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:41.609 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:41.609 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:41.609 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:41.609 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:41.609 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:41.609 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:41.609 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:41.609 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:41.609 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:41.609 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:41.609 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:41.609 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:41.609 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:41.609 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:41.609 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:42.549 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:24:42.807 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.QcE /tmp/spdk.key-null.VSZ /tmp/spdk.key-sha256.Zik /tmp/spdk.key-sha384.xg8 /tmp/spdk.key-sha512.Stw /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:24:42.807 07:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:43.745 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:43.745 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:24:43.745 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 
00:24:43.745 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:24:43.745 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:43.745 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:43.745 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:43.745 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:43.745 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:43.745 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:24:43.745 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:24:43.745 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:24:43.745 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:24:43.745 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:43.745 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:43.745 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:43.745 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:44.004 00:24:44.004 real 0m51.578s 00:24:44.004 user 0m49.219s 00:24:44.004 sys 0m6.265s 00:24:44.004 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:44.004 07:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.004 ************************************ 00:24:44.004 END TEST nvmf_auth_host 00:24:44.004 ************************************ 00:24:44.004 07:26:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:24:44.004 07:26:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:44.004 07:26:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:44.004 07:26:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:44.004 07:26:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.004 ************************************ 00:24:44.004 START TEST nvmf_digest 00:24:44.004 ************************************ 00:24:44.004 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:44.004 * Looking for test storage... 
00:24:44.004 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:44.004 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:44.004 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:24:44.004 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:44.262 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:44.262 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:44.262 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:44.262 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:44.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.263 --rc genhtml_branch_coverage=1 00:24:44.263 --rc genhtml_function_coverage=1 00:24:44.263 --rc genhtml_legend=1 00:24:44.263 --rc geninfo_all_blocks=1 00:24:44.263 --rc geninfo_unexecuted_blocks=1 00:24:44.263 00:24:44.263 ' 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:44.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.263 --rc genhtml_branch_coverage=1 00:24:44.263 --rc genhtml_function_coverage=1 00:24:44.263 --rc genhtml_legend=1 00:24:44.263 --rc geninfo_all_blocks=1 00:24:44.263 --rc geninfo_unexecuted_blocks=1 00:24:44.263 00:24:44.263 ' 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:44.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.263 --rc genhtml_branch_coverage=1 00:24:44.263 --rc genhtml_function_coverage=1 00:24:44.263 --rc genhtml_legend=1 00:24:44.263 --rc geninfo_all_blocks=1 00:24:44.263 --rc geninfo_unexecuted_blocks=1 00:24:44.263 00:24:44.263 ' 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:44.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.263 --rc genhtml_branch_coverage=1 00:24:44.263 --rc genhtml_function_coverage=1 00:24:44.263 --rc genhtml_legend=1 00:24:44.263 --rc geninfo_all_blocks=1 00:24:44.263 --rc geninfo_unexecuted_blocks=1 00:24:44.263 00:24:44.263 ' 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:44.263 
07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:44.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:44.263 07:26:47 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.263 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:44.264 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:44.264 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:24:44.264 07:26:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:46.165 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:46.165 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:24:46.165 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:46.165 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:46.165 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:46.165 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:46.165 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:46.165 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:24:46.165 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:46.165 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:24:46.165 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:24:46.165 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:24:46.165 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:24:46.165 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:24:46.165 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:24:46.165 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:46.165 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:46.165 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:46.165 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:46.165 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:46.165 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:46.165 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:46.165 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:46.165 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:46.165 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:46.165 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:46.165 
07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:46.165 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:46.165 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:46.165 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:46.165 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:46.165 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:46.165 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:46.165 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:46.165 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:46.165 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:46.165 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:46.165 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:46.165 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:46.165 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:46.424 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:46.424 Found net devices under 0000:09:00.0: cvl_0_0 
00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:46.424 Found net devices under 0000:09:00.1: cvl_0_1 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:46.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:46.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:24:46.424 00:24:46.424 --- 10.0.0.2 ping statistics --- 00:24:46.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.424 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:46.424 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:46.424 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:24:46.424 00:24:46.424 --- 10.0.0.1 ping statistics --- 00:24:46.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.424 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:46.424 ************************************ 00:24:46.424 START TEST nvmf_digest_clean 00:24:46.424 ************************************ 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:24:46.424 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:46.425 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:24:46.425 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:46.425 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:46.425 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:46.425 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:46.425 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:46.425 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2603971 00:24:46.425 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:46.425 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2603971 00:24:46.425 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 2603971 ']' 00:24:46.425 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:46.425 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:46.425 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:46.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:46.425 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:46.425 07:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:46.425 [2024-11-20 07:26:49.801845] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:24:46.425 [2024-11-20 07:26:49.801929] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:46.683 [2024-11-20 07:26:49.873818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.683 [2024-11-20 07:26:49.929794] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:46.683 [2024-11-20 07:26:49.929862] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:46.683 [2024-11-20 07:26:49.929876] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:46.683 [2024-11-20 07:26:49.929887] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:46.683 [2024-11-20 07:26:49.929897] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:46.683 [2024-11-20 07:26:49.930485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.683 07:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:46.683 07:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:24:46.683 07:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:46.683 07:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:46.683 07:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:46.683 07:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:46.683 07:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:46.683 07:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:24:46.683 07:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:24:46.683 07:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.683 07:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:46.941 null0 00:24:46.941 [2024-11-20 07:26:50.229979] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:46.941 [2024-11-20 07:26:50.254174] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:46.941 07:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.941 07:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:24:46.941 07:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:46.941 07:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:46.941 07:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:46.941 07:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:46.941 07:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:46.941 07:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:46.941 07:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2603997 00:24:46.941 07:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:46.941 07:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2603997 /var/tmp/bperf.sock 00:24:46.941 07:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 2603997 ']' 00:24:46.941 07:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:46.941 07:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:24:46.941 07:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:46.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:46.941 07:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:46.941 07:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:46.941 [2024-11-20 07:26:50.305037] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:24:46.941 [2024-11-20 07:26:50.305111] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2603997 ] 00:24:47.199 [2024-11-20 07:26:50.373769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.199 [2024-11-20 07:26:50.433847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:47.199 07:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:47.199 07:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:24:47.199 07:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:47.199 07:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:47.199 07:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:47.766 07:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:47.766 07:26:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:48.024 nvme0n1 00:24:48.024 07:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:48.024 07:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:48.024 Running I/O for 2 seconds... 
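Condensed for reference, the digest_clean setup that the trace above just executed reduces to four shell steps. Flags, socket path, target address, and NQN are copied verbatim from the trace; $SPDK is shorthand introduced here for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout and is not a variable the test scripts themselves define.

  # start bdevperf paused (-z --wait-for-rpc) on its own RPC socket
  $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  # finish initialization, then attach the TCP target with data digest enabled (--ddgst)
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # run the 2-second randread workload against the attached nvme0n1 bdev
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests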
00:24:50.331 18608.00 IOPS, 72.69 MiB/s [2024-11-20T06:26:53.764Z] 18817.50 IOPS, 73.51 MiB/s 00:24:50.331 Latency(us) 00:24:50.331 [2024-11-20T06:26:53.764Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.331 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:50.331 nvme0n1 : 2.01 18831.65 73.56 0.00 0.00 6788.54 3446.71 14563.56 00:24:50.331 [2024-11-20T06:26:53.764Z] =================================================================================================================== 00:24:50.331 [2024-11-20T06:26:53.764Z] Total : 18831.65 73.56 0.00 0.00 6788.54 3446.71 14563.56 00:24:50.331 { 00:24:50.331 "results": [ 00:24:50.331 { 00:24:50.331 "job": "nvme0n1", 00:24:50.331 "core_mask": "0x2", 00:24:50.331 "workload": "randread", 00:24:50.331 "status": "finished", 00:24:50.331 "queue_depth": 128, 00:24:50.331 "io_size": 4096, 00:24:50.331 "runtime": 2.00694, 00:24:50.331 "iops": 18831.654160064576, 00:24:50.331 "mibps": 73.56114906275225, 00:24:50.331 "io_failed": 0, 00:24:50.331 "io_timeout": 0, 00:24:50.331 "avg_latency_us": 6788.539801771396, 00:24:50.331 "min_latency_us": 3446.708148148148, 00:24:50.331 "max_latency_us": 14563.555555555555 00:24:50.331 } 00:24:50.331 ], 00:24:50.331 "core_count": 1 00:24:50.331 } 00:24:50.331 07:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:50.331 07:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:50.331 07:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:50.331 07:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:50.331 07:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:50.331 | select(.opcode=="crc32c") 00:24:50.331 | "\(.module_name) \(.executed)"' 00:24:50.331 07:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:50.331 07:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:50.331 07:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:50.331 07:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:50.331 07:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2603997 00:24:50.331 07:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 2603997 ']' 00:24:50.331 07:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 2603997 00:24:50.331 07:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:24:50.331 07:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:50.331 07:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2603997 00:24:50.331 07:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:50.331 07:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:24:50.331 07:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2603997' 00:24:50.332 killing process with pid 2603997 00:24:50.332 07:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 2603997 00:24:50.332 Received shutdown signal, test time was about 2.000000 seconds 00:24:50.332 00:24:50.332 Latency(us) 00:24:50.332 [2024-11-20T06:26:53.765Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.332 [2024-11-20T06:26:53.765Z] =================================================================================================================== 00:24:50.332 [2024-11-20T06:26:53.765Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:50.332 07:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 2603997 00:24:50.590 07:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:24:50.590 07:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:50.590 07:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:50.590 07:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:50.590 07:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:50.590 07:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:50.590 07:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:50.590 07:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2604504 00:24:50.590 07:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2604504 /var/tmp/bperf.sock 00:24:50.590 07:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:50.590 07:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 2604504 ']' 00:24:50.590 07:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:50.590 07:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:50.590 07:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:50.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:50.590 07:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:50.590 07:26:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:50.590 [2024-11-20 07:26:54.014961] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:24:50.590 [2024-11-20 07:26:54.015046] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2604504 ] 00:24:50.590 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:50.590 Zero copy mechanism will not be used. 00:24:50.849 [2024-11-20 07:26:54.082170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.849 [2024-11-20 07:26:54.139140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:50.849 07:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:50.849 07:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:24:50.849 07:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:50.849 07:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:50.849 07:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:51.415 07:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:51.415 07:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:51.673 nvme0n1 00:24:51.673 07:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:51.673 07:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:51.931 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:51.931 Zero copy mechanism will not be used. 00:24:51.931 Running I/O for 2 seconds... 
00:24:53.797 6060.00 IOPS, 757.50 MiB/s [2024-11-20T06:26:57.230Z] 6030.50 IOPS, 753.81 MiB/s 00:24:53.797 Latency(us) 00:24:53.797 [2024-11-20T06:26:57.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:53.797 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:53.797 nvme0n1 : 2.00 6031.46 753.93 0.00 0.00 2648.35 676.60 5194.33 00:24:53.797 [2024-11-20T06:26:57.230Z] =================================================================================================================== 00:24:53.797 [2024-11-20T06:26:57.230Z] Total : 6031.46 753.93 0.00 0.00 2648.35 676.60 5194.33 00:24:53.797 { 00:24:53.797 "results": [ 00:24:53.797 { 00:24:53.797 "job": "nvme0n1", 00:24:53.797 "core_mask": "0x2", 00:24:53.797 "workload": "randread", 00:24:53.797 "status": "finished", 00:24:53.797 "queue_depth": 16, 00:24:53.797 "io_size": 131072, 00:24:53.797 "runtime": 2.002334, 00:24:53.797 "iops": 6031.461284680778, 00:24:53.797 "mibps": 753.9326605850972, 00:24:53.797 "io_failed": 0, 00:24:53.797 "io_timeout": 0, 00:24:53.797 "avg_latency_us": 2648.346838158851, 00:24:53.797 "min_latency_us": 676.5985185185185, 00:24:53.797 "max_latency_us": 5194.334814814815 00:24:53.797 } 00:24:53.797 ], 00:24:53.797 "core_count": 1 00:24:53.797 } 00:24:53.797 07:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:53.797 07:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:53.797 07:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:53.797 07:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:53.797 07:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:53.797 | select(.opcode=="crc32c") 00:24:53.797 | "\(.module_name) \(.executed)"' 00:24:54.055 07:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:54.055 07:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:54.055 07:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:54.055 07:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:54.055 07:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2604504 00:24:54.055 07:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 2604504 ']' 00:24:54.055 07:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 2604504 00:24:54.055 07:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:24:54.313 07:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:54.313 07:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2604504 00:24:54.313 07:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:54.313 07:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:24:54.313 07:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2604504' 00:24:54.313 killing process with pid 2604504 00:24:54.313 07:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 2604504 00:24:54.313 Received shutdown signal, test time was about 2.000000 seconds 00:24:54.313 00:24:54.313 Latency(us) 00:24:54.313 [2024-11-20T06:26:57.746Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.313 [2024-11-20T06:26:57.746Z] =================================================================================================================== 00:24:54.313 [2024-11-20T06:26:57.746Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:54.313 07:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 2604504 00:24:54.571 07:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:24:54.571 07:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:54.571 07:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:54.571 07:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:54.571 07:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:54.571 07:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:54.571 07:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:54.571 07:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2604933 00:24:54.571 07:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:54.571 07:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2604933 /var/tmp/bperf.sock 00:24:54.571 07:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 2604933 ']' 00:24:54.571 07:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:54.571 07:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:54.571 07:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:54.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:54.571 07:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:54.571 07:26:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:54.571 [2024-11-20 07:26:57.798011] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:24:54.571 [2024-11-20 07:26:57.798097] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2604933 ] 00:24:54.571 [2024-11-20 07:26:57.863200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.571 [2024-11-20 07:26:57.918042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:54.828 07:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:54.828 07:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:24:54.828 07:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:54.828 07:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:54.828 07:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:55.086 07:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:55.086 07:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:55.653 nvme0n1 00:24:55.653 07:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:55.653 07:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:55.653 Running I/O for 2 seconds... 
00:24:57.956 21614.00 IOPS, 84.43 MiB/s [2024-11-20T06:27:01.389Z] 21625.00 IOPS, 84.47 MiB/s 00:24:57.956 Latency(us) 00:24:57.956 [2024-11-20T06:27:01.389Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:57.956 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:57.956 nvme0n1 : 2.01 21651.21 84.58 0.00 0.00 5902.96 2572.89 15825.73 00:24:57.956 [2024-11-20T06:27:01.389Z] =================================================================================================================== 00:24:57.956 [2024-11-20T06:27:01.390Z] Total : 21651.21 84.58 0.00 0.00 5902.96 2572.89 15825.73 00:24:57.957 { 00:24:57.957 "results": [ 00:24:57.957 { 00:24:57.957 "job": "nvme0n1", 00:24:57.957 "core_mask": "0x2", 00:24:57.957 "workload": "randwrite", 00:24:57.957 "status": "finished", 00:24:57.957 "queue_depth": 128, 00:24:57.957 "io_size": 4096, 00:24:57.957 "runtime": 2.007232, 00:24:57.957 "iops": 21651.2092274336, 00:24:57.957 "mibps": 84.5750360446625, 00:24:57.957 "io_failed": 0, 00:24:57.957 "io_timeout": 0, 00:24:57.957 "avg_latency_us": 5902.957933173284, 00:24:57.957 "min_latency_us": 2572.8948148148147, 00:24:57.957 "max_latency_us": 15825.730370370371 00:24:57.957 } 00:24:57.957 ], 00:24:57.957 "core_count": 1 00:24:57.957 } 00:24:57.957 07:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:57.957 07:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:57.957 07:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:57.957 07:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:57.957 07:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:57.957 | select(.opcode=="crc32c") 00:24:57.957 | "\(.module_name) \(.executed)"' 00:24:57.957 07:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:57.957 07:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:57.957 07:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:57.957 07:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:57.957 07:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2604933 00:24:57.957 07:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 2604933 ']' 00:24:57.957 07:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 2604933 00:24:57.957 07:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:24:57.957 07:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:57.957 07:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2604933 00:24:57.957 07:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:57.957 07:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:24:57.957 07:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2604933' 00:24:57.957 killing process with pid 2604933 00:24:57.957 07:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 2604933 00:24:57.957 Received shutdown signal, test time was about 2.000000 seconds 00:24:57.957 00:24:57.957 Latency(us) 00:24:57.957 [2024-11-20T06:27:01.390Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:57.957 [2024-11-20T06:27:01.390Z] =================================================================================================================== 00:24:57.957 [2024-11-20T06:27:01.390Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:57.957 07:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 2604933 00:24:58.216 07:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:24:58.216 07:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:58.216 07:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:58.216 07:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:58.216 07:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:58.216 07:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:58.216 07:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:58.216 07:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2605432 00:24:58.216 07:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:58.216 07:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2605432 /var/tmp/bperf.sock 00:24:58.216 07:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 2605432 ']' 00:24:58.216 07:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:58.216 07:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:58.216 07:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:58.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:58.216 07:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:58.216 07:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:58.216 [2024-11-20 07:27:01.610155] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:24:58.216 [2024-11-20 07:27:01.610242] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2605432 ] 00:24:58.216 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:58.216 Zero copy mechanism will not be used. 00:24:58.475 [2024-11-20 07:27:01.679843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.475 [2024-11-20 07:27:01.738024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.475 07:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:58.475 07:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:24:58.475 07:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:58.475 07:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:58.475 07:27:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:59.040 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:59.040 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:59.606 nvme0n1 00:24:59.606 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:59.606 07:27:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:59.606 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:59.606 Zero copy mechanism will not be used. 00:24:59.606 Running I/O for 2 seconds... 
00:25:01.912 6515.00 IOPS, 814.38 MiB/s [2024-11-20T06:27:05.345Z] 6514.00 IOPS, 814.25 MiB/s 00:25:01.912 Latency(us) 00:25:01.912 [2024-11-20T06:27:05.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.912 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:01.912 nvme0n1 : 2.00 6510.21 813.78 0.00 0.00 2449.50 1929.67 8835.22 00:25:01.912 [2024-11-20T06:27:05.345Z] =================================================================================================================== 00:25:01.912 [2024-11-20T06:27:05.345Z] Total : 6510.21 813.78 0.00 0.00 2449.50 1929.67 8835.22 00:25:01.912 { 00:25:01.912 "results": [ 00:25:01.912 { 00:25:01.912 "job": "nvme0n1", 00:25:01.912 "core_mask": "0x2", 00:25:01.912 "workload": "randwrite", 00:25:01.912 "status": "finished", 00:25:01.912 "queue_depth": 16, 00:25:01.912 "io_size": 131072, 00:25:01.912 "runtime": 2.004236, 00:25:01.912 "iops": 6510.21137231344, 00:25:01.912 "mibps": 813.77642153918, 00:25:01.912 "io_failed": 0, 00:25:01.912 "io_timeout": 0, 00:25:01.912 "avg_latency_us": 2449.5013847446467, 00:25:01.912 "min_latency_us": 1929.671111111111, 00:25:01.912 "max_latency_us": 8835.223703703703 00:25:01.912 } 00:25:01.912 ], 00:25:01.912 "core_count": 1 00:25:01.912 } 00:25:01.912 07:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:01.912 07:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:01.912 07:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:01.912 07:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:01.912 07:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:01.912 | select(.opcode=="crc32c") 00:25:01.912 | "\(.module_name) \(.executed)"' 00:25:01.912 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:01.912 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:01.912 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:01.912 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:01.912 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2605432 00:25:01.912 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 2605432 ']' 00:25:01.912 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 2605432 00:25:01.912 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:25:01.912 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:01.912 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2605432 00:25:01.912 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:01.912 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:25:01.912 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2605432' 00:25:01.912 killing process with pid 2605432 00:25:01.912 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 2605432 00:25:01.912 Received shutdown signal, test time was about 2.000000 seconds 00:25:01.912 00:25:01.912 Latency(us) 00:25:01.912 [2024-11-20T06:27:05.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.912 [2024-11-20T06:27:05.345Z] =================================================================================================================== 00:25:01.912 [2024-11-20T06:27:05.345Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:01.912 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 2605432 00:25:02.169 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2603971 00:25:02.169 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 2603971 ']' 00:25:02.169 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 2603971 00:25:02.169 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:25:02.169 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:02.169 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2603971 00:25:02.169 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:02.169 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:02.169 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2603971' 00:25:02.169 killing process with pid 2603971 00:25:02.169 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 2603971 00:25:02.169 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 2603971 00:25:02.427 00:25:02.427 real 0m15.950s 00:25:02.427 user 0m31.989s 00:25:02.427 sys 0m4.340s 00:25:02.427 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:02.427 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:02.427 ************************************ 00:25:02.427 END TEST nvmf_digest_clean 00:25:02.427 ************************************ 00:25:02.427 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:02.427 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:02.427 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:02.427 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:02.427 ************************************ 00:25:02.427 START TEST nvmf_digest_error 00:25:02.427 ************************************ 00:25:02.427 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # 
run_digest_error 00:25:02.427 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:02.427 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:02.427 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:02.427 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:02.427 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2606006 00:25:02.427 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:02.427 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2606006 00:25:02.427 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 2606006 ']' 00:25:02.427 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:02.427 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:02.427 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:02.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:02.428 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:02.428 07:27:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:02.428 [2024-11-20 07:27:05.805886] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:25:02.428 [2024-11-20 07:27:05.805959] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:02.686 [2024-11-20 07:27:05.878463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.686 [2024-11-20 07:27:05.936166] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:02.686 [2024-11-20 07:27:05.936220] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:02.686 [2024-11-20 07:27:05.936247] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:02.686 [2024-11-20 07:27:05.936258] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:02.686 [2024-11-20 07:27:05.936275] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:02.686 [2024-11-20 07:27:05.936880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.686 07:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:02.686 07:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:25:02.686 07:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:02.686 07:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:02.686 07:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:02.686 07:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:02.686 07:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:02.686 07:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.686 07:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:02.686 [2024-11-20 07:27:06.069643] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:02.686 07:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.686 07:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:25:02.686 07:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:25:02.686 07:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.686 07:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:02.945 null0 00:25:02.945 [2024-11-20 07:27:06.187368] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:02.945 [2024-11-20 07:27:06.211612] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:02.945 07:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.945 07:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:02.945 07:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:02.945 07:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:02.945 07:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:02.945 07:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:02.945 07:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2606038 00:25:02.945 07:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:02.945 07:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2606038 /var/tmp/bperf.sock 00:25:02.945 07:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 2606038 ']' 
00:25:02.945 07:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:02.945 07:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:02.945 07:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:02.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:02.945 07:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:02.945 07:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:02.945 [2024-11-20 07:27:06.259396] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:25:02.945 [2024-11-20 07:27:06.259479] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2606038 ] 00:25:02.945 [2024-11-20 07:27:06.324093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.203 [2024-11-20 07:27:06.382841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:03.204 07:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:03.204 07:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:25:03.204 07:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:03.204 07:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:03.462 07:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:03.462 07:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.462 07:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:03.462 07:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.462 07:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:03.462 07:27:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:03.720 nvme0n1 00:25:03.720 07:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:03.720 07:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.720 07:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
00:25:03.720 07:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.720 07:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:03.720 07:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:03.978 Running I/O for 2 seconds... 00:25:03.978 [2024-11-20 07:27:07.267799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:03.978 [2024-11-20 07:27:07.267858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.978 [2024-11-20 07:27:07.267879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.978 [2024-11-20 07:27:07.284259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:03.978 [2024-11-20 07:27:07.284369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.978 [2024-11-20 07:27:07.284389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.978 [2024-11-20 07:27:07.296308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:03.978 [2024-11-20 07:27:07.296353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.978 [2024-11-20 07:27:07.296379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.978 [2024-11-20 07:27:07.311157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:03.978 [2024-11-20 07:27:07.311201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.978 [2024-11-20 07:27:07.311218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.978 [2024-11-20 07:27:07.326484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:03.978 [2024-11-20 07:27:07.326563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.978 [2024-11-20 07:27:07.326583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.978 [2024-11-20 07:27:07.338174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:03.978 [2024-11-20 07:27:07.338216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.978 [2024-11-20 07:27:07.338250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.978 [2024-11-20 07:27:07.351566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:03.978 [2024-11-20 07:27:07.351598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.978 [2024-11-20 07:27:07.351615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.978 [2024-11-20 07:27:07.363404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:03.978 [2024-11-20 07:27:07.363433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.978 [2024-11-20 07:27:07.363449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.978 [2024-11-20 07:27:07.377660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:03.978 [2024-11-20 07:27:07.377704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.978 [2024-11-20 07:27:07.377720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.978 [2024-11-20 07:27:07.388812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:03.978 [2024-11-20 07:27:07.388840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.978 [2024-11-20 07:27:07.388872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.978 [2024-11-20 07:27:07.404639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:03.978 [2024-11-20 07:27:07.404697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.978 [2024-11-20 07:27:07.404714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.236 [2024-11-20 07:27:07.416766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.236 [2024-11-20 07:27:07.416800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:25506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.236 [2024-11-20 07:27:07.416832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.236 [2024-11-20 07:27:07.429479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.237 [2024-11-20 07:27:07.429510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.237 [2024-11-20 07:27:07.429526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.237 [2024-11-20 07:27:07.442470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.237 [2024-11-20 07:27:07.442501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.237 [2024-11-20 07:27:07.442518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.237 [2024-11-20 07:27:07.454641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.237 [2024-11-20 07:27:07.454671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.237 [2024-11-20 07:27:07.454703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.237 [2024-11-20 07:27:07.468270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.237 [2024-11-20 07:27:07.468326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.237 [2024-11-20 07:27:07.468372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.237 [2024-11-20 07:27:07.479904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.237 [2024-11-20 07:27:07.479950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.237 [2024-11-20 07:27:07.479966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.237 [2024-11-20 07:27:07.494038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.237 [2024-11-20 07:27:07.494069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.237 [2024-11-20 07:27:07.494100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.237 [2024-11-20 07:27:07.507779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.237 [2024-11-20 07:27:07.507808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.237 [2024-11-20 07:27:07.507838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.237 [2024-11-20 07:27:07.521369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.237 [2024-11-20 07:27:07.521414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.237 [2024-11-20 07:27:07.521431] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.237 [2024-11-20 07:27:07.533878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.237 [2024-11-20 07:27:07.533908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.237 [2024-11-20 07:27:07.533938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.237 [2024-11-20 07:27:07.549355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.237 [2024-11-20 07:27:07.549387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.237 [2024-11-20 07:27:07.549404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.237 [2024-11-20 07:27:07.561493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.237 [2024-11-20 07:27:07.561524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.237 [2024-11-20 07:27:07.561540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.237 [2024-11-20 07:27:07.577561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.237 [2024-11-20 07:27:07.577592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.237 [2024-11-20 07:27:07.577607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.237 [2024-11-20 07:27:07.591357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.237 [2024-11-20 07:27:07.591388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.237 [2024-11-20 07:27:07.591405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.237 [2024-11-20 07:27:07.603824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.237 [2024-11-20 07:27:07.603854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:18638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.237 [2024-11-20 07:27:07.603885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.237 [2024-11-20 07:27:07.618506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.237 [2024-11-20 07:27:07.618537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:04.237 [2024-11-20 07:27:07.618554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.237 [2024-11-20 07:27:07.633251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.237 [2024-11-20 07:27:07.633296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.237 [2024-11-20 07:27:07.633323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.237 [2024-11-20 07:27:07.649368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.237 [2024-11-20 07:27:07.649405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.237 [2024-11-20 07:27:07.649422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.237 [2024-11-20 07:27:07.664028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.237 [2024-11-20 07:27:07.664076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.237 [2024-11-20 07:27:07.664092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.497 [2024-11-20 07:27:07.676118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.497 [2024-11-20 07:27:07.676149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.497 [2024-11-20 07:27:07.676166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.497 [2024-11-20 07:27:07.691482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.497 [2024-11-20 07:27:07.691514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.497 [2024-11-20 07:27:07.691530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.497 [2024-11-20 07:27:07.703603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.497 [2024-11-20 07:27:07.703634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.497 [2024-11-20 07:27:07.703665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.497 [2024-11-20 07:27:07.718474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.497 [2024-11-20 07:27:07.718505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:15059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.497 [2024-11-20 07:27:07.718520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.497 [2024-11-20 07:27:07.731798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.497 [2024-11-20 07:27:07.731828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.497 [2024-11-20 07:27:07.731860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.497 [2024-11-20 07:27:07.744396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.497 [2024-11-20 07:27:07.744427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.497 [2024-11-20 07:27:07.744444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.497 [2024-11-20 07:27:07.758515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.497 [2024-11-20 07:27:07.758545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.497 [2024-11-20 07:27:07.758561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.497 [2024-11-20 07:27:07.772097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.497 [2024-11-20 07:27:07.772128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.497 [2024-11-20 07:27:07.772160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.497 [2024-11-20 07:27:07.784293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.497 [2024-11-20 07:27:07.784361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.497 [2024-11-20 07:27:07.784426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.497 [2024-11-20 07:27:07.798937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.497 [2024-11-20 07:27:07.798966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.497 [2024-11-20 07:27:07.798996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.497 [2024-11-20 07:27:07.813413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.497 [2024-11-20 07:27:07.813443] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.497 [2024-11-20 07:27:07.813458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.497 [2024-11-20 07:27:07.826601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.497 [2024-11-20 07:27:07.826647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.497 [2024-11-20 07:27:07.826664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.497 [2024-11-20 07:27:07.840978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.497 [2024-11-20 07:27:07.841024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.497 [2024-11-20 07:27:07.841042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.497 [2024-11-20 07:27:07.852437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.497 [2024-11-20 07:27:07.852467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.497 [2024-11-20 07:27:07.852483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.497 [2024-11-20 07:27:07.868727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.497 [2024-11-20 07:27:07.868772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:16976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.497 [2024-11-20 07:27:07.868788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.497 [2024-11-20 07:27:07.882952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.497 [2024-11-20 07:27:07.882988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.497 [2024-11-20 07:27:07.883025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.497 [2024-11-20 07:27:07.898358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.497 [2024-11-20 07:27:07.898389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.497 [2024-11-20 07:27:07.898405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.497 [2024-11-20 07:27:07.914281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 
00:25:04.497 [2024-11-20 07:27:07.914333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.497 [2024-11-20 07:27:07.914352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.498 [2024-11-20 07:27:07.926022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.498 [2024-11-20 07:27:07.926054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.498 [2024-11-20 07:27:07.926071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.789 [2024-11-20 07:27:07.941284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.789 [2024-11-20 07:27:07.941326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:7439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.789 [2024-11-20 07:27:07.941346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.789 [2024-11-20 07:27:07.958046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.789 [2024-11-20 07:27:07.958078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.789 [2024-11-20 07:27:07.958109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.789 [2024-11-20 07:27:07.970350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.789 [2024-11-20 07:27:07.970385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.789 [2024-11-20 07:27:07.970401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.789 [2024-11-20 07:27:07.984786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.789 [2024-11-20 07:27:07.984817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.789 [2024-11-20 07:27:07.984850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.789 [2024-11-20 07:27:08.000792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.789 [2024-11-20 07:27:08.000822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.789 [2024-11-20 07:27:08.000853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.789 [2024-11-20 07:27:08.015612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.789 [2024-11-20 07:27:08.015652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.789 [2024-11-20 07:27:08.015670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.789 [2024-11-20 07:27:08.027184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.789 [2024-11-20 07:27:08.027214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.789 [2024-11-20 07:27:08.027245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.789 [2024-11-20 07:27:08.042828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.789 [2024-11-20 07:27:08.042858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.789 [2024-11-20 07:27:08.042890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.789 [2024-11-20 07:27:08.059261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.789 [2024-11-20 07:27:08.059291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.789 [2024-11-20 07:27:08.059332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.789 [2024-11-20 07:27:08.075404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.789 [2024-11-20 07:27:08.075437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.789 [2024-11-20 07:27:08.075454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.789 [2024-11-20 07:27:08.090677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.789 [2024-11-20 07:27:08.090709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.789 [2024-11-20 07:27:08.090741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.789 [2024-11-20 07:27:08.106870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.789 [2024-11-20 07:27:08.106903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.789 [2024-11-20 07:27:08.106921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.789 [2024-11-20 07:27:08.121336] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.789 [2024-11-20 07:27:08.121369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.789 [2024-11-20 07:27:08.121386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.789 [2024-11-20 07:27:08.138236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.790 [2024-11-20 07:27:08.138267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.790 [2024-11-20 07:27:08.138283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.790 [2024-11-20 07:27:08.152720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.790 [2024-11-20 07:27:08.152765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.790 [2024-11-20 07:27:08.152783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.790 [2024-11-20 07:27:08.164809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.790 [2024-11-20 07:27:08.164839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.790 [2024-11-20 07:27:08.164870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.790 [2024-11-20 07:27:08.181393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.790 [2024-11-20 07:27:08.181426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.790 [2024-11-20 07:27:08.181444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.790 [2024-11-20 07:27:08.196234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:04.790 [2024-11-20 07:27:08.196268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.790 [2024-11-20 07:27:08.196286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.071 [2024-11-20 07:27:08.208057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.071 [2024-11-20 07:27:08.208090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.071 [2024-11-20 07:27:08.208108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:25:05.071 [2024-11-20 07:27:08.224694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.071 [2024-11-20 07:27:08.224727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.071 [2024-11-20 07:27:08.224745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.071 [2024-11-20 07:27:08.240883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.071 [2024-11-20 07:27:08.240914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.071 [2024-11-20 07:27:08.240945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.071 17981.00 IOPS, 70.24 MiB/s [2024-11-20T06:27:08.504Z] [2024-11-20 07:27:08.254855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.071 [2024-11-20 07:27:08.254885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.071 [2024-11-20 07:27:08.254917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.071 [2024-11-20 07:27:08.270568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.071 [2024-11-20 07:27:08.270618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.071 [2024-11-20 07:27:08.270637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.071 [2024-11-20 07:27:08.286742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.071 [2024-11-20 07:27:08.286774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.071 [2024-11-20 07:27:08.286806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.071 [2024-11-20 07:27:08.298710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.071 [2024-11-20 07:27:08.298739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.071 [2024-11-20 07:27:08.298771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.071 [2024-11-20 07:27:08.314951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.071 [2024-11-20 07:27:08.314981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.071 [2024-11-20 07:27:08.315013] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.071 [2024-11-20 07:27:08.330393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.071 [2024-11-20 07:27:08.330424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.071 [2024-11-20 07:27:08.330441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.071 [2024-11-20 07:27:08.346385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.071 [2024-11-20 07:27:08.346433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.071 [2024-11-20 07:27:08.346450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.071 [2024-11-20 07:27:08.362460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.071 [2024-11-20 07:27:08.362494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.071 [2024-11-20 07:27:08.362511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.071 [2024-11-20 07:27:08.377041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.071 [2024-11-20 07:27:08.377089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.071 [2024-11-20 07:27:08.377106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.071 [2024-11-20 07:27:08.389984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.071 [2024-11-20 07:27:08.390015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.071 [2024-11-20 07:27:08.390048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.071 [2024-11-20 07:27:08.405797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.071 [2024-11-20 07:27:08.405846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.071 [2024-11-20 07:27:08.405880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.071 [2024-11-20 07:27:08.418049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.071 [2024-11-20 07:27:08.418079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.071 
[2024-11-20 07:27:08.418112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.071 [2024-11-20 07:27:08.434186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.071 [2024-11-20 07:27:08.434217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.071 [2024-11-20 07:27:08.434248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.071 [2024-11-20 07:27:08.446296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.071 [2024-11-20 07:27:08.446349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:22980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.071 [2024-11-20 07:27:08.446367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.071 [2024-11-20 07:27:08.462456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.071 [2024-11-20 07:27:08.462488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.071 [2024-11-20 07:27:08.462504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.071 [2024-11-20 07:27:08.478836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.072 [2024-11-20 07:27:08.478885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.072 [2024-11-20 07:27:08.478904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.072 [2024-11-20 07:27:08.490747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.072 [2024-11-20 07:27:08.490778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.072 [2024-11-20 07:27:08.490809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.330 [2024-11-20 07:27:08.505569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.330 [2024-11-20 07:27:08.505644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.330 [2024-11-20 07:27:08.505663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.330 [2024-11-20 07:27:08.521954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.330 [2024-11-20 07:27:08.521986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13992 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.330 [2024-11-20 07:27:08.522026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.330 [2024-11-20 07:27:08.537315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.330 [2024-11-20 07:27:08.537347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.330 [2024-11-20 07:27:08.537371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.330 [2024-11-20 07:27:08.553913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.330 [2024-11-20 07:27:08.553960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:25137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.330 [2024-11-20 07:27:08.553978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.330 [2024-11-20 07:27:08.565710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.330 [2024-11-20 07:27:08.565740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.330 [2024-11-20 07:27:08.565772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.330 [2024-11-20 07:27:08.580819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.330 [2024-11-20 07:27:08.580869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.330 [2024-11-20 07:27:08.580888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.330 [2024-11-20 07:27:08.597493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.330 [2024-11-20 07:27:08.597524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.330 [2024-11-20 07:27:08.597541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.330 [2024-11-20 07:27:08.609735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.330 [2024-11-20 07:27:08.609765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.330 [2024-11-20 07:27:08.609797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.330 [2024-11-20 07:27:08.625450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.330 [2024-11-20 07:27:08.625498] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.330 [2024-11-20 07:27:08.625516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.330 [2024-11-20 07:27:08.640187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.330 [2024-11-20 07:27:08.640218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.330 [2024-11-20 07:27:08.640251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.330 [2024-11-20 07:27:08.657073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.330 [2024-11-20 07:27:08.657121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.330 [2024-11-20 07:27:08.657154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.330 [2024-11-20 07:27:08.671464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.330 [2024-11-20 07:27:08.671512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.330 [2024-11-20 07:27:08.671529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.330 [2024-11-20 07:27:08.683403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.330 [2024-11-20 07:27:08.683436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.330 [2024-11-20 07:27:08.683452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.330 [2024-11-20 07:27:08.700286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.330 [2024-11-20 07:27:08.700327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.330 [2024-11-20 07:27:08.700365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.330 [2024-11-20 07:27:08.712180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.331 [2024-11-20 07:27:08.712225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:25321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.331 [2024-11-20 07:27:08.712243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.331 [2024-11-20 07:27:08.729192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 
00:25:05.331 [2024-11-20 07:27:08.729223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.331 [2024-11-20 07:27:08.729266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.331 [2024-11-20 07:27:08.744344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.331 [2024-11-20 07:27:08.744378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.331 [2024-11-20 07:27:08.744396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.331 [2024-11-20 07:27:08.756478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.331 [2024-11-20 07:27:08.756509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.331 [2024-11-20 07:27:08.756526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.589 [2024-11-20 07:27:08.770282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.589 [2024-11-20 07:27:08.770339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.589 [2024-11-20 07:27:08.770359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.589 [2024-11-20 07:27:08.783073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.589 [2024-11-20 07:27:08.783106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.589 [2024-11-20 07:27:08.783139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.589 [2024-11-20 07:27:08.795621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.589 [2024-11-20 07:27:08.795651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.589 [2024-11-20 07:27:08.795668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.589 [2024-11-20 07:27:08.809391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.589 [2024-11-20 07:27:08.809424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.589 [2024-11-20 07:27:08.809441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.589 [2024-11-20 07:27:08.822228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.589 [2024-11-20 07:27:08.822259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.590 [2024-11-20 07:27:08.822290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.590 [2024-11-20 07:27:08.835753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.590 [2024-11-20 07:27:08.835783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.590 [2024-11-20 07:27:08.835814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.590 [2024-11-20 07:27:08.848069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.590 [2024-11-20 07:27:08.848115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.590 [2024-11-20 07:27:08.848132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.590 [2024-11-20 07:27:08.861273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.590 [2024-11-20 07:27:08.861323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.590 [2024-11-20 07:27:08.861343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.590 [2024-11-20 07:27:08.872952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.590 [2024-11-20 07:27:08.872982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.590 [2024-11-20 07:27:08.873013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.590 [2024-11-20 07:27:08.887337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.590 [2024-11-20 07:27:08.887376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.590 [2024-11-20 07:27:08.887393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.590 [2024-11-20 07:27:08.898480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.590 [2024-11-20 07:27:08.898511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.590 [2024-11-20 07:27:08.898527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.590 [2024-11-20 07:27:08.912106] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.590 [2024-11-20 07:27:08.912136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.590 [2024-11-20 07:27:08.912170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.590 [2024-11-20 07:27:08.923682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.590 [2024-11-20 07:27:08.923711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.590 [2024-11-20 07:27:08.923742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.590 [2024-11-20 07:27:08.939115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.590 [2024-11-20 07:27:08.939146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.590 [2024-11-20 07:27:08.939177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.590 [2024-11-20 07:27:08.952943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.590 [2024-11-20 07:27:08.952973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.590 [2024-11-20 07:27:08.953004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.590 [2024-11-20 07:27:08.965205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.590 [2024-11-20 07:27:08.965234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.590 [2024-11-20 07:27:08.965265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.590 [2024-11-20 07:27:08.978175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.590 [2024-11-20 07:27:08.978205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.590 [2024-11-20 07:27:08.978237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.590 [2024-11-20 07:27:08.991454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.590 [2024-11-20 07:27:08.991484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.590 [2024-11-20 07:27:08.991500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:25:05.590 [2024-11-20 07:27:09.002843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.590 [2024-11-20 07:27:09.002873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.590 [2024-11-20 07:27:09.002903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.590 [2024-11-20 07:27:09.017273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.590 [2024-11-20 07:27:09.017366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.590 [2024-11-20 07:27:09.017461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.849 [2024-11-20 07:27:09.032837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.849 [2024-11-20 07:27:09.032882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:25182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.849 [2024-11-20 07:27:09.032900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.849 [2024-11-20 07:27:09.044460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.849 [2024-11-20 07:27:09.044490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.849 [2024-11-20 07:27:09.044506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.849 [2024-11-20 07:27:09.060233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.849 [2024-11-20 07:27:09.060263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.849 [2024-11-20 07:27:09.060294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.849 [2024-11-20 07:27:09.075794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.849 [2024-11-20 07:27:09.075838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.849 [2024-11-20 07:27:09.075856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.849 [2024-11-20 07:27:09.088078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.849 [2024-11-20 07:27:09.088108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.849 [2024-11-20 07:27:09.088139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.849 [2024-11-20 07:27:09.101579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.849 [2024-11-20 07:27:09.101624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.849 [2024-11-20 07:27:09.101641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.849 [2024-11-20 07:27:09.113271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.849 [2024-11-20 07:27:09.113325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.849 [2024-11-20 07:27:09.113351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.849 [2024-11-20 07:27:09.127875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.849 [2024-11-20 07:27:09.127904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.849 [2024-11-20 07:27:09.127935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.849 [2024-11-20 07:27:09.139585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.849 [2024-11-20 07:27:09.139614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.849 [2024-11-20 07:27:09.139630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.849 [2024-11-20 07:27:09.153356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.849 [2024-11-20 07:27:09.153403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.849 [2024-11-20 07:27:09.153419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.849 [2024-11-20 07:27:09.167974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.849 [2024-11-20 07:27:09.168004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.849 [2024-11-20 07:27:09.168035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.849 [2024-11-20 07:27:09.183072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.849 [2024-11-20 07:27:09.183102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.849 [2024-11-20 07:27:09.183133] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.849 [2024-11-20 07:27:09.196643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.849 [2024-11-20 07:27:09.196688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.849 [2024-11-20 07:27:09.196705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.849 [2024-11-20 07:27:09.209510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.849 [2024-11-20 07:27:09.209541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.849 [2024-11-20 07:27:09.209557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.849 [2024-11-20 07:27:09.225584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.849 [2024-11-20 07:27:09.225629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.849 [2024-11-20 07:27:09.225646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.849 [2024-11-20 07:27:09.242457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.849 [2024-11-20 07:27:09.242493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.849 [2024-11-20 07:27:09.242510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.849 18056.50 IOPS, 70.53 MiB/s [2024-11-20T06:27:09.282Z] [2024-11-20 07:27:09.255769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x159a720) 00:25:05.849 [2024-11-20 07:27:09.255812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.849 [2024-11-20 07:27:09.255829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.849 00:25:05.849 Latency(us) 00:25:05.849 [2024-11-20T06:27:09.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:05.849 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:05.849 nvme0n1 : 2.01 18085.20 70.65 0.00 0.00 7068.20 3543.80 22913.33 00:25:05.849 [2024-11-20T06:27:09.282Z] =================================================================================================================== 00:25:05.849 [2024-11-20T06:27:09.282Z] Total : 18085.20 70.65 0.00 0.00 7068.20 3543.80 22913.33 00:25:05.849 { 00:25:05.849 "results": [ 00:25:05.849 { 00:25:05.849 "job": "nvme0n1", 00:25:05.849 "core_mask": "0x2", 00:25:05.849 "workload": "randread", 00:25:05.849 "status": "finished", 00:25:05.849 "queue_depth": 128, 
00:25:05.849 "io_size": 4096, 00:25:05.849 "runtime": 2.007387, 00:25:05.849 "iops": 18085.202305285427, 00:25:05.849 "mibps": 70.6453215050212, 00:25:05.849 "io_failed": 0, 00:25:05.849 "io_timeout": 0, 00:25:05.849 "avg_latency_us": 7068.19882016878, 00:25:05.849 "min_latency_us": 3543.7985185185184, 00:25:05.849 "max_latency_us": 22913.327407407407 00:25:05.849 } 00:25:05.849 ], 00:25:05.849 "core_count": 1 00:25:05.849 } 00:25:05.849 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:05.849 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:05.849 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:05.849 | .driver_specific 00:25:05.849 | .nvme_error 00:25:05.849 | .status_code 00:25:05.849 | .command_transient_transport_error' 00:25:05.849 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:06.415 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 142 > 0 )) 00:25:06.415 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2606038 00:25:06.415 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 2606038 ']' 00:25:06.415 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 2606038 00:25:06.415 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:25:06.415 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:06.415 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2606038 00:25:06.415 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:06.415 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:06.415 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2606038' 00:25:06.415 killing process with pid 2606038 00:25:06.415 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 2606038 00:25:06.415 Received shutdown signal, test time was about 2.000000 seconds 00:25:06.415 00:25:06.415 Latency(us) 00:25:06.415 [2024-11-20T06:27:09.848Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.415 [2024-11-20T06:27:09.848Z] =================================================================================================================== 00:25:06.415 [2024-11-20T06:27:09.848Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:06.415 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 2606038 00:25:06.415 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:25:06.415 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:06.415 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 
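The "(( 142 > 0 ))" check above is the pass criterion for the 4 KiB randread pass that just finished: the harness queries bdevperf's per-bdev iostat over its RPC socket and pulls the transient-transport-error counter out with jq. Below is a minimal stand-alone sketch of that extraction; the rpc.py path, socket, bdev name, and jq filter are copied from this log, while wrapping them in a separate script (and the pass/fail messages) is illustrative only.

#!/usr/bin/env bash
# Sketch: read back the NVMe transient transport error counter the digest
# test asserts on. Assumes bdevperf was configured with
# "bdev_nvme_set_options --nvme-error-stat" (as in this test) so the
# per-status-code counters are populated.
set -euo pipefail

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock
bdev=nvme0n1

errcount=$("$rpc_py" -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '
  .bdevs[0]
  | .driver_specific
  | .nvme_error
  | .status_code
  | .command_transient_transport_error')

if (( errcount > 0 )); then
  echo "transient transport errors: $errcount"   # 142 in the run above
else
  echo "no transient transport errors were counted" >&2
  exit 1
fi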
00:25:06.415 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:06.415 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:06.415 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2606947 00:25:06.415 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:25:06.416 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2606947 /var/tmp/bperf.sock 00:25:06.416 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 2606947 ']' 00:25:06.416 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:06.416 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:06.416 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:06.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:06.416 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:06.416 07:27:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:06.673 [2024-11-20 07:27:09.868034] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:25:06.673 [2024-11-20 07:27:09.868148] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2606947 ] 00:25:06.673 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:06.673 Zero copy mechanism will not be used. 
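The trace above launches a second bdevperf instance for the 128 KiB randread error case and then parks the script on waitforlisten until the app answers on its private RPC socket. A minimal stand-alone sketch of that launch step, using only the paths and flags visible in the log (the polling loop below is just an illustration of what the harness's waitforlisten helper amounts to, not its actual implementation):

  # start bdevperf pinned to core 1 (mask 0x2), RPC on a private socket,
  # 128 KiB random reads, queue depth 16, 2 s run; -z keeps it idle until perform_tests
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # wait until the app answers RPCs on that socket before sending configuration
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done

The -z flag is what makes the rest of the sequence possible: bdevperf comes up idle and only starts the workload when it receives the perform_tests RPC, which is sent later in this trace after the error injection has been armed.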
00:25:06.673 [2024-11-20 07:27:09.940845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.673 [2024-11-20 07:27:10.000959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:06.929 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:06.930 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:25:06.930 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:06.930 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:07.187 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:07.187 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.187 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:07.187 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.187 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:07.187 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:07.445 nvme0n1 00:25:07.445 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:07.445 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.445 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:07.445 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.445 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:07.445 07:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:07.704 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:07.704 Zero copy mechanism will not be used. 00:25:07.704 Running I/O for 2 seconds... 
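With the bdevperf host up, the lines above configure the data-digest error case entirely over RPC: NVMe error statistics are enabled with unlimited bdev retries, crc32c error injection is reset, the controller is attached with --ddgst so TCP data digests are generated and checked, corruption is armed for 32 crc32c operations, and perform_tests starts the 2-second run. A condensed sketch of that sequence, pieced together from the trace (bperf_rpc, bperf_py and the iostat/jq chain follow the script's own helpers; rpc_cmd's socket is not shown in this excerpt, so the default-socket wrapper below is an assumption):

  bperf_rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
  bperf_py()  { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock "$@"; }
  rpc_cmd()   { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"; }   # assumption: target app on rpc.py's default socket

  # keep per-status NVMe error counters and never give up on retries at the bdev layer
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # make sure no stale injection is active, then connect with TCP data digest enabled
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # corrupt the next 32 crc32c results so received data digests stop matching
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
  # run the queued 2 s randread workload in the idle (-z) bdevperf
  bperf_py perform_tests
  # afterwards the script reads the transient transport error counter, as earlier in this log
  bperf_rpc bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

Each injected digest mismatch then surfaces below as a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, and the numeric check seen earlier in the log as (( 142 > 0 )) is what turns that counter into a pass/fail result.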
00:25:07.704 [2024-11-20 07:27:10.977990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.704 [2024-11-20 07:27:10.978042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.704 [2024-11-20 07:27:10.978064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.704 [2024-11-20 07:27:10.983160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.704 [2024-11-20 07:27:10.983195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.704 [2024-11-20 07:27:10.983213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.704 [2024-11-20 07:27:10.988571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.704 [2024-11-20 07:27:10.988604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.704 [2024-11-20 07:27:10.988623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.704 [2024-11-20 07:27:10.994553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.704 [2024-11-20 07:27:10.994586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.704 [2024-11-20 07:27:10.994604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.704 [2024-11-20 07:27:11.001224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.705 [2024-11-20 07:27:11.001257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.705 [2024-11-20 07:27:11.001275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.705 [2024-11-20 07:27:11.005016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.705 [2024-11-20 07:27:11.005049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.705 [2024-11-20 07:27:11.005076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.705 [2024-11-20 07:27:11.009430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.705 [2024-11-20 07:27:11.009463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.705 [2024-11-20 07:27:11.009482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.705 [2024-11-20 07:27:11.015058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.705 [2024-11-20 07:27:11.015091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.705 [2024-11-20 07:27:11.015110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.705 [2024-11-20 07:27:11.020615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.705 [2024-11-20 07:27:11.020649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.705 [2024-11-20 07:27:11.020667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.705 [2024-11-20 07:27:11.025836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.705 [2024-11-20 07:27:11.025870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.705 [2024-11-20 07:27:11.025889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.705 [2024-11-20 07:27:11.031875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.705 [2024-11-20 07:27:11.031909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.705 [2024-11-20 07:27:11.031928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.705 [2024-11-20 07:27:11.037730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.705 [2024-11-20 07:27:11.037774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.705 [2024-11-20 07:27:11.037807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.705 [2024-11-20 07:27:11.043236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.705 [2024-11-20 07:27:11.043269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.705 [2024-11-20 07:27:11.043287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.705 [2024-11-20 07:27:11.049283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.705 [2024-11-20 07:27:11.049323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.705 [2024-11-20 07:27:11.049357] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.705 [2024-11-20 07:27:11.055416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.705 [2024-11-20 07:27:11.055455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.705 [2024-11-20 07:27:11.055474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.705 [2024-11-20 07:27:11.061767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.705 [2024-11-20 07:27:11.061798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.705 [2024-11-20 07:27:11.061816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.705 [2024-11-20 07:27:11.067300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.705 [2024-11-20 07:27:11.067342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.705 [2024-11-20 07:27:11.067361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.705 [2024-11-20 07:27:11.072633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.705 [2024-11-20 07:27:11.072666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.705 [2024-11-20 07:27:11.072685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.705 [2024-11-20 07:27:11.078634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.705 [2024-11-20 07:27:11.078667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.705 [2024-11-20 07:27:11.078685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.705 [2024-11-20 07:27:11.085371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.705 [2024-11-20 07:27:11.085404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.705 [2024-11-20 07:27:11.085422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.705 [2024-11-20 07:27:11.091902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.705 [2024-11-20 07:27:11.091935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.705 [2024-11-20 
07:27:11.091953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.705 [2024-11-20 07:27:11.097270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.705 [2024-11-20 07:27:11.097310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.705 [2024-11-20 07:27:11.097331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.705 [2024-11-20 07:27:11.102109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.705 [2024-11-20 07:27:11.102142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.705 [2024-11-20 07:27:11.102161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.705 [2024-11-20 07:27:11.105211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.705 [2024-11-20 07:27:11.105245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.705 [2024-11-20 07:27:11.105263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.705 [2024-11-20 07:27:11.111450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.705 [2024-11-20 07:27:11.111483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.705 [2024-11-20 07:27:11.111502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.705 [2024-11-20 07:27:11.116315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.705 [2024-11-20 07:27:11.116348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.705 [2024-11-20 07:27:11.116367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.705 [2024-11-20 07:27:11.120963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.705 [2024-11-20 07:27:11.121011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.705 [2024-11-20 07:27:11.121029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.705 [2024-11-20 07:27:11.126535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.705 [2024-11-20 07:27:11.126566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8000 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.705 [2024-11-20 07:27:11.126584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.705 [2024-11-20 07:27:11.131944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.705 [2024-11-20 07:27:11.131976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.705 [2024-11-20 07:27:11.131994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.965 [2024-11-20 07:27:11.135646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.965 [2024-11-20 07:27:11.135678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.965 [2024-11-20 07:27:11.135696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.965 [2024-11-20 07:27:11.140658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.965 [2024-11-20 07:27:11.140707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.965 [2024-11-20 07:27:11.140725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.965 [2024-11-20 07:27:11.145660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.965 [2024-11-20 07:27:11.145707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.965 [2024-11-20 07:27:11.145731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.965 [2024-11-20 07:27:11.150423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.965 [2024-11-20 07:27:11.150456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.965 [2024-11-20 07:27:11.150474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.965 [2024-11-20 07:27:11.154733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.965 [2024-11-20 07:27:11.154765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.965 [2024-11-20 07:27:11.154783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.965 [2024-11-20 07:27:11.159594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.965 [2024-11-20 07:27:11.159624] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.965 [2024-11-20 07:27:11.159641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.965 [2024-11-20 07:27:11.164105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.965 [2024-11-20 07:27:11.164136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.965 [2024-11-20 07:27:11.164170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.965 [2024-11-20 07:27:11.168761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.965 [2024-11-20 07:27:11.168791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.965 [2024-11-20 07:27:11.168809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.965 [2024-11-20 07:27:11.173459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.965 [2024-11-20 07:27:11.173489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.965 [2024-11-20 07:27:11.173506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.965 [2024-11-20 07:27:11.178377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.966 [2024-11-20 07:27:11.178409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.966 [2024-11-20 07:27:11.178427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.966 [2024-11-20 07:27:11.182912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.966 [2024-11-20 07:27:11.182944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.966 [2024-11-20 07:27:11.182962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.966 [2024-11-20 07:27:11.187463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.966 [2024-11-20 07:27:11.187500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.966 [2024-11-20 07:27:11.187533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.966 [2024-11-20 07:27:11.192087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.966 [2024-11-20 
07:27:11.192119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.966 [2024-11-20 07:27:11.192138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.966 [2024-11-20 07:27:11.196612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.966 [2024-11-20 07:27:11.196644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.966 [2024-11-20 07:27:11.196677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.966 [2024-11-20 07:27:11.201133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.966 [2024-11-20 07:27:11.201164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.966 [2024-11-20 07:27:11.201197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.966 [2024-11-20 07:27:11.205710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.966 [2024-11-20 07:27:11.205743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.966 [2024-11-20 07:27:11.205760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.966 [2024-11-20 07:27:11.210533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.966 [2024-11-20 07:27:11.210565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.966 [2024-11-20 07:27:11.210583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.966 [2024-11-20 07:27:11.215186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.966 [2024-11-20 07:27:11.215233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.966 [2024-11-20 07:27:11.215250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.966 [2024-11-20 07:27:11.219666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.966 [2024-11-20 07:27:11.219697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.966 [2024-11-20 07:27:11.219715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.966 [2024-11-20 07:27:11.224081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x14fbdc0) 00:25:07.966 [2024-11-20 07:27:11.224111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.966 [2024-11-20 07:27:11.224151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.966 [2024-11-20 07:27:11.228581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.966 [2024-11-20 07:27:11.228612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.966 [2024-11-20 07:27:11.228630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.966 [2024-11-20 07:27:11.233685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.966 [2024-11-20 07:27:11.233717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.966 [2024-11-20 07:27:11.233736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.966 [2024-11-20 07:27:11.238812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.966 [2024-11-20 07:27:11.238843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.966 [2024-11-20 07:27:11.238861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.966 [2024-11-20 07:27:11.243379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.966 [2024-11-20 07:27:11.243411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.966 [2024-11-20 07:27:11.243429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.966 [2024-11-20 07:27:11.248125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.966 [2024-11-20 07:27:11.248156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.966 [2024-11-20 07:27:11.248189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.966 [2024-11-20 07:27:11.253879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.966 [2024-11-20 07:27:11.253910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.966 [2024-11-20 07:27:11.253941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.966 [2024-11-20 07:27:11.261757] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.966 [2024-11-20 07:27:11.261802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.966 [2024-11-20 07:27:11.261819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.966 [2024-11-20 07:27:11.267842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.966 [2024-11-20 07:27:11.267889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.966 [2024-11-20 07:27:11.267907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.966 [2024-11-20 07:27:11.273339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.966 [2024-11-20 07:27:11.273392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.966 [2024-11-20 07:27:11.273411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.966 [2024-11-20 07:27:11.278790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.966 [2024-11-20 07:27:11.278821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.966 [2024-11-20 07:27:11.278838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.966 [2024-11-20 07:27:11.283108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.966 [2024-11-20 07:27:11.283140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.966 [2024-11-20 07:27:11.283159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.966 [2024-11-20 07:27:11.287768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.966 [2024-11-20 07:27:11.287814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.966 [2024-11-20 07:27:11.287832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.966 [2024-11-20 07:27:11.292392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.966 [2024-11-20 07:27:11.292425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.966 [2024-11-20 07:27:11.292443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:25:07.966 [2024-11-20 07:27:11.298591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.966 [2024-11-20 07:27:11.298639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.966 [2024-11-20 07:27:11.298656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.966 [2024-11-20 07:27:11.304856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.966 [2024-11-20 07:27:11.304889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.966 [2024-11-20 07:27:11.304907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.966 [2024-11-20 07:27:11.310199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.966 [2024-11-20 07:27:11.310246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.966 [2024-11-20 07:27:11.310263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.966 [2024-11-20 07:27:11.315367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.967 [2024-11-20 07:27:11.315400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.967 [2024-11-20 07:27:11.315418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.967 [2024-11-20 07:27:11.319856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.967 [2024-11-20 07:27:11.319888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.967 [2024-11-20 07:27:11.319906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.967 [2024-11-20 07:27:11.325568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.967 [2024-11-20 07:27:11.325601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.967 [2024-11-20 07:27:11.325633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.967 [2024-11-20 07:27:11.330481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.967 [2024-11-20 07:27:11.330513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.967 [2024-11-20 07:27:11.330531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.967 [2024-11-20 07:27:11.335141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.967 [2024-11-20 07:27:11.335172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.967 [2024-11-20 07:27:11.335190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.967 [2024-11-20 07:27:11.339934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.967 [2024-11-20 07:27:11.339965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.967 [2024-11-20 07:27:11.339998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.967 [2024-11-20 07:27:11.345487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.967 [2024-11-20 07:27:11.345519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.967 [2024-11-20 07:27:11.345538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.967 [2024-11-20 07:27:11.350678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.967 [2024-11-20 07:27:11.350712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.967 [2024-11-20 07:27:11.350745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.967 [2024-11-20 07:27:11.357615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.967 [2024-11-20 07:27:11.357648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.967 [2024-11-20 07:27:11.357667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.967 [2024-11-20 07:27:11.365276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.967 [2024-11-20 07:27:11.365330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.967 [2024-11-20 07:27:11.365357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.967 [2024-11-20 07:27:11.372075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.967 [2024-11-20 07:27:11.372108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.967 [2024-11-20 07:27:11.372126] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.967 [2024-11-20 07:27:11.380390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.967 [2024-11-20 07:27:11.380421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.967 [2024-11-20 07:27:11.380439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.967 [2024-11-20 07:27:11.387063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.967 [2024-11-20 07:27:11.387097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.967 [2024-11-20 07:27:11.387116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.967 [2024-11-20 07:27:11.391965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:07.967 [2024-11-20 07:27:11.391998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.967 [2024-11-20 07:27:11.392017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.227 [2024-11-20 07:27:11.396493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.227 [2024-11-20 07:27:11.396524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.227 [2024-11-20 07:27:11.396542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.227 [2024-11-20 07:27:11.400951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.227 [2024-11-20 07:27:11.400982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.227 [2024-11-20 07:27:11.400999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.227 [2024-11-20 07:27:11.405453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.227 [2024-11-20 07:27:11.405484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.227 [2024-11-20 07:27:11.405501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.227 [2024-11-20 07:27:11.410136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.227 [2024-11-20 07:27:11.410167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:08.227 [2024-11-20 07:27:11.410184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.227 [2024-11-20 07:27:11.414719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.227 [2024-11-20 07:27:11.414754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.227 [2024-11-20 07:27:11.414773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.227 [2024-11-20 07:27:11.419299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.227 [2024-11-20 07:27:11.419339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.227 [2024-11-20 07:27:11.419357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.227 [2024-11-20 07:27:11.424867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.227 [2024-11-20 07:27:11.424898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.227 [2024-11-20 07:27:11.424916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.227 [2024-11-20 07:27:11.432497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.227 [2024-11-20 07:27:11.432544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.227 [2024-11-20 07:27:11.432562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.227 [2024-11-20 07:27:11.438706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.227 [2024-11-20 07:27:11.438754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.227 [2024-11-20 07:27:11.438772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.227 [2024-11-20 07:27:11.444542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.227 [2024-11-20 07:27:11.444590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.227 [2024-11-20 07:27:11.444608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.227 [2024-11-20 07:27:11.449991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.227 [2024-11-20 07:27:11.450025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.227 [2024-11-20 07:27:11.450044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.227 [2024-11-20 07:27:11.456097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.227 [2024-11-20 07:27:11.456130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.227 [2024-11-20 07:27:11.456148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.227 [2024-11-20 07:27:11.461723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.227 [2024-11-20 07:27:11.461756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.227 [2024-11-20 07:27:11.461774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.227 [2024-11-20 07:27:11.467037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.227 [2024-11-20 07:27:11.467070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.227 [2024-11-20 07:27:11.467089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.227 [2024-11-20 07:27:11.473136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.227 [2024-11-20 07:27:11.473169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.227 [2024-11-20 07:27:11.473188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.227 [2024-11-20 07:27:11.475861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.227 [2024-11-20 07:27:11.475891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.227 [2024-11-20 07:27:11.475909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.227 [2024-11-20 07:27:11.480195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.227 [2024-11-20 07:27:11.480226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.227 [2024-11-20 07:27:11.480243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.227 [2024-11-20 07:27:11.484589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.227 [2024-11-20 07:27:11.484622] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.227 [2024-11-20 07:27:11.484639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.227 [2024-11-20 07:27:11.489053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.227 [2024-11-20 07:27:11.489082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.227 [2024-11-20 07:27:11.489099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.228 [2024-11-20 07:27:11.494706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.228 [2024-11-20 07:27:11.494751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.228 [2024-11-20 07:27:11.494769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.228 [2024-11-20 07:27:11.499626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.228 [2024-11-20 07:27:11.499657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.228 [2024-11-20 07:27:11.499674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.228 [2024-11-20 07:27:11.504083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.228 [2024-11-20 07:27:11.504121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.228 [2024-11-20 07:27:11.504139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.228 [2024-11-20 07:27:11.509092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.228 [2024-11-20 07:27:11.509123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.228 [2024-11-20 07:27:11.509139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.228 [2024-11-20 07:27:11.515091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.228 [2024-11-20 07:27:11.515123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.228 [2024-11-20 07:27:11.515155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.228 [2024-11-20 07:27:11.522857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 
00:25:08.228 [2024-11-20 07:27:11.522889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.228 [2024-11-20 07:27:11.522907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.228 [2024-11-20 07:27:11.528832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.228 [2024-11-20 07:27:11.528865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.228 [2024-11-20 07:27:11.528884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.228 [2024-11-20 07:27:11.534737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.228 [2024-11-20 07:27:11.534784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.228 [2024-11-20 07:27:11.534802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.228 [2024-11-20 07:27:11.539614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.228 [2024-11-20 07:27:11.539647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.228 [2024-11-20 07:27:11.539679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.228 [2024-11-20 07:27:11.544285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.228 [2024-11-20 07:27:11.544325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.228 [2024-11-20 07:27:11.544344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.228 [2024-11-20 07:27:11.549433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.228 [2024-11-20 07:27:11.549470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.228 [2024-11-20 07:27:11.549488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.228 [2024-11-20 07:27:11.554985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.228 [2024-11-20 07:27:11.555018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.228 [2024-11-20 07:27:11.555038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.228 [2024-11-20 07:27:11.559266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x14fbdc0) 00:25:08.228 [2024-11-20 07:27:11.559299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.228 [2024-11-20 07:27:11.559326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.228 [2024-11-20 07:27:11.562583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.228 [2024-11-20 07:27:11.562616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.228 [2024-11-20 07:27:11.562634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.228 [2024-11-20 07:27:11.566190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.228 [2024-11-20 07:27:11.566222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.228 [2024-11-20 07:27:11.566241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.228 [2024-11-20 07:27:11.570901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.228 [2024-11-20 07:27:11.570949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.228 [2024-11-20 07:27:11.570968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.228 [2024-11-20 07:27:11.576538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.228 [2024-11-20 07:27:11.576570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.228 [2024-11-20 07:27:11.576602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.228 [2024-11-20 07:27:11.581398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.228 [2024-11-20 07:27:11.581430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.228 [2024-11-20 07:27:11.581449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.228 [2024-11-20 07:27:11.585838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.228 [2024-11-20 07:27:11.585871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.228 [2024-11-20 07:27:11.585888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.228 [2024-11-20 07:27:11.590486] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.228 [2024-11-20 07:27:11.590518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.228 [2024-11-20 07:27:11.590556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.228 [2024-11-20 07:27:11.595199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.228 [2024-11-20 07:27:11.595232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.228 [2024-11-20 07:27:11.595250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.228 [2024-11-20 07:27:11.599701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.228 [2024-11-20 07:27:11.599733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.228 [2024-11-20 07:27:11.599750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.228 [2024-11-20 07:27:11.604083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.228 [2024-11-20 07:27:11.604113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.228 [2024-11-20 07:27:11.604131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.228 [2024-11-20 07:27:11.608630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.228 [2024-11-20 07:27:11.608661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.228 [2024-11-20 07:27:11.608678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.228 [2024-11-20 07:27:11.613112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.228 [2024-11-20 07:27:11.613143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.228 [2024-11-20 07:27:11.613161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.228 [2024-11-20 07:27:11.617691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.228 [2024-11-20 07:27:11.617723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.228 [2024-11-20 07:27:11.617740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:25:08.228 [2024-11-20 07:27:11.622191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.228 [2024-11-20 07:27:11.622222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.229 [2024-11-20 07:27:11.622240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.229 [2024-11-20 07:27:11.626646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.229 [2024-11-20 07:27:11.626677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.229 [2024-11-20 07:27:11.626695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.229 [2024-11-20 07:27:11.631887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.229 [2024-11-20 07:27:11.631927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.229 [2024-11-20 07:27:11.631961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.229 [2024-11-20 07:27:11.638620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.229 [2024-11-20 07:27:11.638653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.229 [2024-11-20 07:27:11.638672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.229 [2024-11-20 07:27:11.646003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.229 [2024-11-20 07:27:11.646036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.229 [2024-11-20 07:27:11.646054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.229 [2024-11-20 07:27:11.652102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.229 [2024-11-20 07:27:11.652136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.229 [2024-11-20 07:27:11.652155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.489 [2024-11-20 07:27:11.658259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.489 [2024-11-20 07:27:11.658292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.489 [2024-11-20 07:27:11.658335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.489 [2024-11-20 07:27:11.664818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.489 [2024-11-20 07:27:11.664851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.489 [2024-11-20 07:27:11.664870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.489 [2024-11-20 07:27:11.671063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.489 [2024-11-20 07:27:11.671097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.489 [2024-11-20 07:27:11.671116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.489 [2024-11-20 07:27:11.675745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.489 [2024-11-20 07:27:11.675777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.489 [2024-11-20 07:27:11.675796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.489 [2024-11-20 07:27:11.679031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.489 [2024-11-20 07:27:11.679061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.489 [2024-11-20 07:27:11.679078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.489 [2024-11-20 07:27:11.684088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.489 [2024-11-20 07:27:11.684121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.489 [2024-11-20 07:27:11.684139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.489 [2024-11-20 07:27:11.688359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.489 [2024-11-20 07:27:11.688391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.489 [2024-11-20 07:27:11.688410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.489 [2024-11-20 07:27:11.693524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.489 [2024-11-20 07:27:11.693556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.489 [2024-11-20 07:27:11.693575] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.489 [2024-11-20 07:27:11.698838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.489 [2024-11-20 07:27:11.698871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.489 [2024-11-20 07:27:11.698890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.489 [2024-11-20 07:27:11.703979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.489 [2024-11-20 07:27:11.704011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.489 [2024-11-20 07:27:11.704030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.489 [2024-11-20 07:27:11.709173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.489 [2024-11-20 07:27:11.709206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.489 [2024-11-20 07:27:11.709224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.489 [2024-11-20 07:27:11.714228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.489 [2024-11-20 07:27:11.714260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.489 [2024-11-20 07:27:11.714278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.489 [2024-11-20 07:27:11.719495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.489 [2024-11-20 07:27:11.719527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.489 [2024-11-20 07:27:11.719545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.489 [2024-11-20 07:27:11.724673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.489 [2024-11-20 07:27:11.724705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.489 [2024-11-20 07:27:11.724729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.489 [2024-11-20 07:27:11.729822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.489 [2024-11-20 07:27:11.729869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.489 [2024-11-20 07:27:11.729888] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.489 [2024-11-20 07:27:11.734472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.489 [2024-11-20 07:27:11.734504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.489 [2024-11-20 07:27:11.734538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.489 [2024-11-20 07:27:11.739072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.489 [2024-11-20 07:27:11.739104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.489 [2024-11-20 07:27:11.739122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.489 [2024-11-20 07:27:11.744151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.489 [2024-11-20 07:27:11.744182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.489 [2024-11-20 07:27:11.744200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.489 [2024-11-20 07:27:11.750856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.489 [2024-11-20 07:27:11.750889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.489 [2024-11-20 07:27:11.750907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.490 [2024-11-20 07:27:11.757890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.490 [2024-11-20 07:27:11.757923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.490 [2024-11-20 07:27:11.757958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.490 [2024-11-20 07:27:11.764799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.490 [2024-11-20 07:27:11.764832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.490 [2024-11-20 07:27:11.764851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.490 [2024-11-20 07:27:11.772312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.490 [2024-11-20 07:27:11.772348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:08.490 [2024-11-20 07:27:11.772367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.490 [2024-11-20 07:27:11.780216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.490 [2024-11-20 07:27:11.780257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.490 [2024-11-20 07:27:11.780276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.490 [2024-11-20 07:27:11.787946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.490 [2024-11-20 07:27:11.787980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.490 [2024-11-20 07:27:11.787998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.490 [2024-11-20 07:27:11.795215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.490 [2024-11-20 07:27:11.795248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.490 [2024-11-20 07:27:11.795266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.490 [2024-11-20 07:27:11.803008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.490 [2024-11-20 07:27:11.803041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.490 [2024-11-20 07:27:11.803060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.490 [2024-11-20 07:27:11.809950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.490 [2024-11-20 07:27:11.809983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.490 [2024-11-20 07:27:11.810001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.490 [2024-11-20 07:27:11.815856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.490 [2024-11-20 07:27:11.815889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.490 [2024-11-20 07:27:11.815907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.490 [2024-11-20 07:27:11.820987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.490 [2024-11-20 07:27:11.821020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8064 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.490 [2024-11-20 07:27:11.821038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.490 [2024-11-20 07:27:11.826333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.490 [2024-11-20 07:27:11.826373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.490 [2024-11-20 07:27:11.826391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.490 [2024-11-20 07:27:11.832192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.490 [2024-11-20 07:27:11.832225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.490 [2024-11-20 07:27:11.832251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.490 [2024-11-20 07:27:11.837986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.490 [2024-11-20 07:27:11.838019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.490 [2024-11-20 07:27:11.838037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.490 [2024-11-20 07:27:11.844580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.490 [2024-11-20 07:27:11.844613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.490 [2024-11-20 07:27:11.844631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.490 [2024-11-20 07:27:11.847581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.490 [2024-11-20 07:27:11.847613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.490 [2024-11-20 07:27:11.847646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.490 [2024-11-20 07:27:11.852457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.490 [2024-11-20 07:27:11.852488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.490 [2024-11-20 07:27:11.852506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.490 [2024-11-20 07:27:11.857407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.490 [2024-11-20 07:27:11.857440] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.490 [2024-11-20 07:27:11.857459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.490 [2024-11-20 07:27:11.862569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.490 [2024-11-20 07:27:11.862601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.490 [2024-11-20 07:27:11.862638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.490 [2024-11-20 07:27:11.867594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.490 [2024-11-20 07:27:11.867626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.490 [2024-11-20 07:27:11.867643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.490 [2024-11-20 07:27:11.873854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.490 [2024-11-20 07:27:11.873887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.490 [2024-11-20 07:27:11.873906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.490 [2024-11-20 07:27:11.879927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.490 [2024-11-20 07:27:11.879981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.490 [2024-11-20 07:27:11.880000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.490 [2024-11-20 07:27:11.887192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.490 [2024-11-20 07:27:11.887225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.490 [2024-11-20 07:27:11.887244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.490 [2024-11-20 07:27:11.895447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.490 [2024-11-20 07:27:11.895495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.490 [2024-11-20 07:27:11.895513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.490 [2024-11-20 07:27:11.903136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.490 [2024-11-20 
07:27:11.903182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.490 [2024-11-20 07:27:11.903199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.490 [2024-11-20 07:27:11.910770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.490 [2024-11-20 07:27:11.910804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.490 [2024-11-20 07:27:11.910822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.490 [2024-11-20 07:27:11.918489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.490 [2024-11-20 07:27:11.918523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.490 [2024-11-20 07:27:11.918542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.749 [2024-11-20 07:27:11.926018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.749 [2024-11-20 07:27:11.926051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.749 [2024-11-20 07:27:11.926084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.749 [2024-11-20 07:27:11.933492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.749 [2024-11-20 07:27:11.933528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.749 [2024-11-20 07:27:11.933557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.749 [2024-11-20 07:27:11.941133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.749 [2024-11-20 07:27:11.941166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.749 [2024-11-20 07:27:11.941184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.749 [2024-11-20 07:27:11.948693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.749 [2024-11-20 07:27:11.948725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.749 [2024-11-20 07:27:11.948743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.749 [2024-11-20 07:27:11.956271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x14fbdc0) 00:25:08.749 [2024-11-20 07:27:11.956311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.749 [2024-11-20 07:27:11.956332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.749 [2024-11-20 07:27:11.963390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.749 [2024-11-20 07:27:11.963424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.749 [2024-11-20 07:27:11.963442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.749 [2024-11-20 07:27:11.971296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.749 [2024-11-20 07:27:11.971338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.749 [2024-11-20 07:27:11.971371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.749 5633.00 IOPS, 704.12 MiB/s [2024-11-20T06:27:12.182Z] [2024-11-20 07:27:11.980297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.749 [2024-11-20 07:27:11.980352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.749 [2024-11-20 07:27:11.980372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.749 [2024-11-20 07:27:11.988258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.749 [2024-11-20 07:27:11.988312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.749 [2024-11-20 07:27:11.988332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.749 [2024-11-20 07:27:11.996249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.750 [2024-11-20 07:27:11.996283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.750 [2024-11-20 07:27:11.996309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.750 [2024-11-20 07:27:12.003197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.750 [2024-11-20 07:27:12.003232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.750 [2024-11-20 07:27:12.003250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.750 [2024-11-20 
07:27:12.008823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.750 [2024-11-20 07:27:12.008858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.750 [2024-11-20 07:27:12.008884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.750 [2024-11-20 07:27:12.015171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.750 [2024-11-20 07:27:12.015205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.750 [2024-11-20 07:27:12.015223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.750 [2024-11-20 07:27:12.021770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.750 [2024-11-20 07:27:12.021805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.750 [2024-11-20 07:27:12.021823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.750 [2024-11-20 07:27:12.027997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.750 [2024-11-20 07:27:12.028030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.750 [2024-11-20 07:27:12.028049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.750 [2024-11-20 07:27:12.033638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.750 [2024-11-20 07:27:12.033673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.750 [2024-11-20 07:27:12.033701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.750 [2024-11-20 07:27:12.038201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.750 [2024-11-20 07:27:12.038233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.750 [2024-11-20 07:27:12.038251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.750 [2024-11-20 07:27:12.042805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.750 [2024-11-20 07:27:12.042838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.750 [2024-11-20 07:27:12.042856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:25:08.750 [2024-11-20 07:27:12.047544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.750 [2024-11-20 07:27:12.047593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.750 [2024-11-20 07:27:12.047610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.750 [2024-11-20 07:27:12.053127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.750 [2024-11-20 07:27:12.053158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.750 [2024-11-20 07:27:12.053176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.750 [2024-11-20 07:27:12.057872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.750 [2024-11-20 07:27:12.057911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.750 [2024-11-20 07:27:12.057929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.750 [2024-11-20 07:27:12.062570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.750 [2024-11-20 07:27:12.062601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.750 [2024-11-20 07:27:12.062619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.750 [2024-11-20 07:27:12.067197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.750 [2024-11-20 07:27:12.067227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.750 [2024-11-20 07:27:12.067244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.750 [2024-11-20 07:27:12.071846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.750 [2024-11-20 07:27:12.071877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.750 [2024-11-20 07:27:12.071894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.750 [2024-11-20 07:27:12.076403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.750 [2024-11-20 07:27:12.076434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.750 [2024-11-20 07:27:12.076452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.750 [2024-11-20 07:27:12.080932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.750 [2024-11-20 07:27:12.080963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.750 [2024-11-20 07:27:12.080980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.750 [2024-11-20 07:27:12.085540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.750 [2024-11-20 07:27:12.085572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.750 [2024-11-20 07:27:12.085590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.750 [2024-11-20 07:27:12.090158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.750 [2024-11-20 07:27:12.090190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.750 [2024-11-20 07:27:12.090207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.750 [2024-11-20 07:27:12.094716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.750 [2024-11-20 07:27:12.094747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.750 [2024-11-20 07:27:12.094765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.750 [2024-11-20 07:27:12.099183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.750 [2024-11-20 07:27:12.099214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.750 [2024-11-20 07:27:12.099232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.750 [2024-11-20 07:27:12.103757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.750 [2024-11-20 07:27:12.103789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.750 [2024-11-20 07:27:12.103806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.750 [2024-11-20 07:27:12.108266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.750 [2024-11-20 07:27:12.108298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.750 [2024-11-20 07:27:12.108327] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.750 [2024-11-20 07:27:12.113886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.750 [2024-11-20 07:27:12.113919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.750 [2024-11-20 07:27:12.113937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.750 [2024-11-20 07:27:12.118766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.750 [2024-11-20 07:27:12.118797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.750 [2024-11-20 07:27:12.118816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.750 [2024-11-20 07:27:12.123463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.750 [2024-11-20 07:27:12.123495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.750 [2024-11-20 07:27:12.123513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.750 [2024-11-20 07:27:12.128215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.750 [2024-11-20 07:27:12.128246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.750 [2024-11-20 07:27:12.128264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.750 [2024-11-20 07:27:12.133133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.751 [2024-11-20 07:27:12.133165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.751 [2024-11-20 07:27:12.133184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.751 [2024-11-20 07:27:12.138338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.751 [2024-11-20 07:27:12.138386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.751 [2024-11-20 07:27:12.138406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.751 [2024-11-20 07:27:12.143840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.751 [2024-11-20 07:27:12.143872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:08.751 [2024-11-20 07:27:12.143890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.751 [2024-11-20 07:27:12.150562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.751 [2024-11-20 07:27:12.150595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.751 [2024-11-20 07:27:12.150613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.751 [2024-11-20 07:27:12.158658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.751 [2024-11-20 07:27:12.158691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.751 [2024-11-20 07:27:12.158709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.751 [2024-11-20 07:27:12.165604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.751 [2024-11-20 07:27:12.165638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.751 [2024-11-20 07:27:12.165657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.751 [2024-11-20 07:27:12.173621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:08.751 [2024-11-20 07:27:12.173656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.751 [2024-11-20 07:27:12.173683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.010 [2024-11-20 07:27:12.181344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.010 [2024-11-20 07:27:12.181378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.010 [2024-11-20 07:27:12.181396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.010 [2024-11-20 07:27:12.188446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.010 [2024-11-20 07:27:12.188480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.010 [2024-11-20 07:27:12.188498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.010 [2024-11-20 07:27:12.194002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.010 [2024-11-20 07:27:12.194036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3488 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.010 [2024-11-20 07:27:12.194054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.010 [2024-11-20 07:27:12.200201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.010 [2024-11-20 07:27:12.200235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.010 [2024-11-20 07:27:12.200253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.010 [2024-11-20 07:27:12.205100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.010 [2024-11-20 07:27:12.205132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.010 [2024-11-20 07:27:12.205150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.010 [2024-11-20 07:27:12.209617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.010 [2024-11-20 07:27:12.209647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.010 [2024-11-20 07:27:12.209664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.010 [2024-11-20 07:27:12.214219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.010 [2024-11-20 07:27:12.214251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.010 [2024-11-20 07:27:12.214269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.010 [2024-11-20 07:27:12.218721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.011 [2024-11-20 07:27:12.218752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.011 [2024-11-20 07:27:12.218769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.011 [2024-11-20 07:27:12.223289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.011 [2024-11-20 07:27:12.223331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.011 [2024-11-20 07:27:12.223350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.011 [2024-11-20 07:27:12.227887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.011 [2024-11-20 07:27:12.227918] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.011 [2024-11-20 07:27:12.227935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.011 [2024-11-20 07:27:12.232611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.011 [2024-11-20 07:27:12.232642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.011 [2024-11-20 07:27:12.232660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.011 [2024-11-20 07:27:12.237195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.011 [2024-11-20 07:27:12.237227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.011 [2024-11-20 07:27:12.237252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.011 [2024-11-20 07:27:12.242535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.011 [2024-11-20 07:27:12.242567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.011 [2024-11-20 07:27:12.242584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.011 [2024-11-20 07:27:12.247968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.011 [2024-11-20 07:27:12.248001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.011 [2024-11-20 07:27:12.248020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.011 [2024-11-20 07:27:12.251974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.011 [2024-11-20 07:27:12.252006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.011 [2024-11-20 07:27:12.252024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.011 [2024-11-20 07:27:12.257498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.011 [2024-11-20 07:27:12.257532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.011 [2024-11-20 07:27:12.257550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.011 [2024-11-20 07:27:12.264992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.011 
[2024-11-20 07:27:12.265025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.011 [2024-11-20 07:27:12.265043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.011 [2024-11-20 07:27:12.271277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.011 [2024-11-20 07:27:12.271320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.011 [2024-11-20 07:27:12.271341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.011 [2024-11-20 07:27:12.278203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.011 [2024-11-20 07:27:12.278237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.011 [2024-11-20 07:27:12.278272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.011 [2024-11-20 07:27:12.284829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.011 [2024-11-20 07:27:12.284862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.011 [2024-11-20 07:27:12.284879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.011 [2024-11-20 07:27:12.291186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.011 [2024-11-20 07:27:12.291227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.011 [2024-11-20 07:27:12.291247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.011 [2024-11-20 07:27:12.295785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.011 [2024-11-20 07:27:12.295817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.011 [2024-11-20 07:27:12.295835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.011 [2024-11-20 07:27:12.300432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.011 [2024-11-20 07:27:12.300464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.011 [2024-11-20 07:27:12.300481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.011 [2024-11-20 07:27:12.304758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x14fbdc0) 00:25:09.011 [2024-11-20 07:27:12.304790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.011 [2024-11-20 07:27:12.304808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.011 [2024-11-20 07:27:12.309416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.011 [2024-11-20 07:27:12.309448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.011 [2024-11-20 07:27:12.309465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.011 [2024-11-20 07:27:12.313972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.011 [2024-11-20 07:27:12.314004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.011 [2024-11-20 07:27:12.314022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.011 [2024-11-20 07:27:12.318421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.011 [2024-11-20 07:27:12.318451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.011 [2024-11-20 07:27:12.318469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.011 [2024-11-20 07:27:12.322994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.011 [2024-11-20 07:27:12.323024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.011 [2024-11-20 07:27:12.323040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.011 [2024-11-20 07:27:12.327578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.011 [2024-11-20 07:27:12.327624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.011 [2024-11-20 07:27:12.327641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.011 [2024-11-20 07:27:12.332156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.011 [2024-11-20 07:27:12.332187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.011 [2024-11-20 07:27:12.332204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.011 [2024-11-20 07:27:12.336761] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.011 [2024-11-20 07:27:12.336792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.011 [2024-11-20 07:27:12.336823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.011 [2024-11-20 07:27:12.341429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.011 [2024-11-20 07:27:12.341477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.011 [2024-11-20 07:27:12.341494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.011 [2024-11-20 07:27:12.346086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.011 [2024-11-20 07:27:12.346131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.011 [2024-11-20 07:27:12.346147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.011 [2024-11-20 07:27:12.350711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.011 [2024-11-20 07:27:12.350741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.011 [2024-11-20 07:27:12.350758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.012 [2024-11-20 07:27:12.355874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.012 [2024-11-20 07:27:12.355907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.012 [2024-11-20 07:27:12.355926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.012 [2024-11-20 07:27:12.360647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.012 [2024-11-20 07:27:12.360678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.012 [2024-11-20 07:27:12.360696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.012 [2024-11-20 07:27:12.365349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.012 [2024-11-20 07:27:12.365382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.012 [2024-11-20 07:27:12.365399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:25:09.012 [2024-11-20 07:27:12.369667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.012 [2024-11-20 07:27:12.369711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.012 [2024-11-20 07:27:12.369738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.012 [2024-11-20 07:27:12.374657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.012 [2024-11-20 07:27:12.374689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.012 [2024-11-20 07:27:12.374707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.012 [2024-11-20 07:27:12.380267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.012 [2024-11-20 07:27:12.380300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.012 [2024-11-20 07:27:12.380342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.012 [2024-11-20 07:27:12.386087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.012 [2024-11-20 07:27:12.386120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.012 [2024-11-20 07:27:12.386154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.012 [2024-11-20 07:27:12.391370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.012 [2024-11-20 07:27:12.391403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.012 [2024-11-20 07:27:12.391421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.012 [2024-11-20 07:27:12.396973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.012 [2024-11-20 07:27:12.397005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.012 [2024-11-20 07:27:12.397041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.012 [2024-11-20 07:27:12.402620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.012 [2024-11-20 07:27:12.402654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.012 [2024-11-20 07:27:12.402673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.012 [2024-11-20 07:27:12.408085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.012 [2024-11-20 07:27:12.408117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.012 [2024-11-20 07:27:12.408136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.012 [2024-11-20 07:27:12.413200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.012 [2024-11-20 07:27:12.413233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.012 [2024-11-20 07:27:12.413251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.012 [2024-11-20 07:27:12.416507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.012 [2024-11-20 07:27:12.416547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.012 [2024-11-20 07:27:12.416566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.012 [2024-11-20 07:27:12.422808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.012 [2024-11-20 07:27:12.422841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.012 [2024-11-20 07:27:12.422859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.012 [2024-11-20 07:27:12.428877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.012 [2024-11-20 07:27:12.428910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.012 [2024-11-20 07:27:12.428927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.012 [2024-11-20 07:27:12.434667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.012 [2024-11-20 07:27:12.434698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.012 [2024-11-20 07:27:12.434716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.012 [2024-11-20 07:27:12.440044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.012 [2024-11-20 07:27:12.440077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.012 [2024-11-20 07:27:12.440095] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.271 [2024-11-20 07:27:12.446843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.271 [2024-11-20 07:27:12.446876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.271 [2024-11-20 07:27:12.446894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.271 [2024-11-20 07:27:12.454324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.271 [2024-11-20 07:27:12.454357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.271 [2024-11-20 07:27:12.454376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.271 [2024-11-20 07:27:12.461452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.271 [2024-11-20 07:27:12.461486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.271 [2024-11-20 07:27:12.461504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.271 [2024-11-20 07:27:12.467083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.271 [2024-11-20 07:27:12.467116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.271 [2024-11-20 07:27:12.467141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.271 [2024-11-20 07:27:12.473241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.271 [2024-11-20 07:27:12.473274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.271 [2024-11-20 07:27:12.473292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.271 [2024-11-20 07:27:12.479034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.271 [2024-11-20 07:27:12.479068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.271 [2024-11-20 07:27:12.479087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.271 [2024-11-20 07:27:12.485019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.271 [2024-11-20 07:27:12.485067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:09.271 [2024-11-20 07:27:12.485086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.271 [2024-11-20 07:27:12.492198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.271 [2024-11-20 07:27:12.492232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.271 [2024-11-20 07:27:12.492250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.271 [2024-11-20 07:27:12.499098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.271 [2024-11-20 07:27:12.499147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.271 [2024-11-20 07:27:12.499165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.272 [2024-11-20 07:27:12.506915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.272 [2024-11-20 07:27:12.506948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.272 [2024-11-20 07:27:12.506967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.272 [2024-11-20 07:27:12.514795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.272 [2024-11-20 07:27:12.514828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.272 [2024-11-20 07:27:12.514847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.272 [2024-11-20 07:27:12.522921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.272 [2024-11-20 07:27:12.522954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.272 [2024-11-20 07:27:12.522972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.272 [2024-11-20 07:27:12.530131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.272 [2024-11-20 07:27:12.530172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.272 [2024-11-20 07:27:12.530191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.272 [2024-11-20 07:27:12.536696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.272 [2024-11-20 07:27:12.536735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.272 [2024-11-20 07:27:12.536753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.272 [2024-11-20 07:27:12.544131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.272 [2024-11-20 07:27:12.544164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.272 [2024-11-20 07:27:12.544182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.272 [2024-11-20 07:27:12.551022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.272 [2024-11-20 07:27:12.551055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.272 [2024-11-20 07:27:12.551073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.272 [2024-11-20 07:27:12.556410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.272 [2024-11-20 07:27:12.556442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.272 [2024-11-20 07:27:12.556461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.272 [2024-11-20 07:27:12.561512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.272 [2024-11-20 07:27:12.561544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.272 [2024-11-20 07:27:12.561563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.272 [2024-11-20 07:27:12.566583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.272 [2024-11-20 07:27:12.566615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.272 [2024-11-20 07:27:12.566634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.272 [2024-11-20 07:27:12.571172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.272 [2024-11-20 07:27:12.571204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.272 [2024-11-20 07:27:12.571221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.272 [2024-11-20 07:27:12.576230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.272 [2024-11-20 07:27:12.576263] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.272 [2024-11-20 07:27:12.576281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.272 [2024-11-20 07:27:12.581846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.272 [2024-11-20 07:27:12.581879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.272 [2024-11-20 07:27:12.581897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.272 [2024-11-20 07:27:12.587576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.272 [2024-11-20 07:27:12.587609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.272 [2024-11-20 07:27:12.587627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.272 [2024-11-20 07:27:12.592771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.272 [2024-11-20 07:27:12.592804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.272 [2024-11-20 07:27:12.592822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.272 [2024-11-20 07:27:12.598345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.272 [2024-11-20 07:27:12.598378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.272 [2024-11-20 07:27:12.598396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.272 [2024-11-20 07:27:12.603988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.272 [2024-11-20 07:27:12.604022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.272 [2024-11-20 07:27:12.604040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.272 [2024-11-20 07:27:12.609937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.272 [2024-11-20 07:27:12.609970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.272 [2024-11-20 07:27:12.609989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.272 [2024-11-20 07:27:12.615944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.272 
[2024-11-20 07:27:12.615977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.272 [2024-11-20 07:27:12.615995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.272 [2024-11-20 07:27:12.621915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.272 [2024-11-20 07:27:12.621948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.272 [2024-11-20 07:27:12.621966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.272 [2024-11-20 07:27:12.627856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.272 [2024-11-20 07:27:12.627889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.272 [2024-11-20 07:27:12.627914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.272 [2024-11-20 07:27:12.633535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.272 [2024-11-20 07:27:12.633568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.272 [2024-11-20 07:27:12.633586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.272 [2024-11-20 07:27:12.639535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.272 [2024-11-20 07:27:12.639568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.272 [2024-11-20 07:27:12.639587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.272 [2024-11-20 07:27:12.645174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.272 [2024-11-20 07:27:12.645208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.273 [2024-11-20 07:27:12.645227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.273 [2024-11-20 07:27:12.650246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.273 [2024-11-20 07:27:12.650278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.273 [2024-11-20 07:27:12.650296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.273 [2024-11-20 07:27:12.655869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x14fbdc0) 00:25:09.273 [2024-11-20 07:27:12.655902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.273 [2024-11-20 07:27:12.655920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.273 [2024-11-20 07:27:12.661565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.273 [2024-11-20 07:27:12.661598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.273 [2024-11-20 07:27:12.661616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.273 [2024-11-20 07:27:12.666705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.273 [2024-11-20 07:27:12.666737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.273 [2024-11-20 07:27:12.666755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.273 [2024-11-20 07:27:12.672366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.273 [2024-11-20 07:27:12.672398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.273 [2024-11-20 07:27:12.672417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.273 [2024-11-20 07:27:12.678441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.273 [2024-11-20 07:27:12.678480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.273 [2024-11-20 07:27:12.678499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.273 [2024-11-20 07:27:12.684341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.273 [2024-11-20 07:27:12.684375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.273 [2024-11-20 07:27:12.684393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.273 [2024-11-20 07:27:12.689703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.273 [2024-11-20 07:27:12.689736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.273 [2024-11-20 07:27:12.689754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.273 [2024-11-20 07:27:12.695289] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.273 [2024-11-20 07:27:12.695330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.273 [2024-11-20 07:27:12.695349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.273 [2024-11-20 07:27:12.700366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.273 [2024-11-20 07:27:12.700398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.273 [2024-11-20 07:27:12.700417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.532 [2024-11-20 07:27:12.706299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.532 [2024-11-20 07:27:12.706338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.532 [2024-11-20 07:27:12.706356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.532 [2024-11-20 07:27:12.712110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.532 [2024-11-20 07:27:12.712144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.532 [2024-11-20 07:27:12.712162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.532 [2024-11-20 07:27:12.716076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.532 [2024-11-20 07:27:12.716108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.532 [2024-11-20 07:27:12.716125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.532 [2024-11-20 07:27:12.720529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.532 [2024-11-20 07:27:12.720561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.532 [2024-11-20 07:27:12.720580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.532 [2024-11-20 07:27:12.725712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.532 [2024-11-20 07:27:12.725746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.532 [2024-11-20 07:27:12.725764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:25:09.532 [2024-11-20 07:27:12.730350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.532 [2024-11-20 07:27:12.730383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.532 [2024-11-20 07:27:12.730401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.532 [2024-11-20 07:27:12.735471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.532 [2024-11-20 07:27:12.735503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.532 [2024-11-20 07:27:12.735522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.532 [2024-11-20 07:27:12.740731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.532 [2024-11-20 07:27:12.740763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.532 [2024-11-20 07:27:12.740781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.532 [2024-11-20 07:27:12.745492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.532 [2024-11-20 07:27:12.745524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.532 [2024-11-20 07:27:12.745542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.532 [2024-11-20 07:27:12.750795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.532 [2024-11-20 07:27:12.750827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.532 [2024-11-20 07:27:12.750845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.532 [2024-11-20 07:27:12.756110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.532 [2024-11-20 07:27:12.756144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.532 [2024-11-20 07:27:12.756162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.532 [2024-11-20 07:27:12.761904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.532 [2024-11-20 07:27:12.761939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.532 [2024-11-20 07:27:12.761972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.532 [2024-11-20 07:27:12.768250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.532 [2024-11-20 07:27:12.768283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.532 [2024-11-20 07:27:12.768317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.532 [2024-11-20 07:27:12.774182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.532 [2024-11-20 07:27:12.774230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.532 [2024-11-20 07:27:12.774248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.532 [2024-11-20 07:27:12.780419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.532 [2024-11-20 07:27:12.780452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.532 [2024-11-20 07:27:12.780486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.532 [2024-11-20 07:27:12.786715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.532 [2024-11-20 07:27:12.786749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.532 [2024-11-20 07:27:12.786768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.532 [2024-11-20 07:27:12.792048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.532 [2024-11-20 07:27:12.792082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.533 [2024-11-20 07:27:12.792116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.533 [2024-11-20 07:27:12.797144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.533 [2024-11-20 07:27:12.797175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.533 [2024-11-20 07:27:12.797193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.533 [2024-11-20 07:27:12.801594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.533 [2024-11-20 07:27:12.801625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.533 [2024-11-20 07:27:12.801642] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.533 [2024-11-20 07:27:12.805977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.533 [2024-11-20 07:27:12.806009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.533 [2024-11-20 07:27:12.806047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.533 [2024-11-20 07:27:12.810680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.533 [2024-11-20 07:27:12.810724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.533 [2024-11-20 07:27:12.810741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.533 [2024-11-20 07:27:12.815367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.533 [2024-11-20 07:27:12.815412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.533 [2024-11-20 07:27:12.815430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.533 [2024-11-20 07:27:12.819884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.533 [2024-11-20 07:27:12.819915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.533 [2024-11-20 07:27:12.819949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.533 [2024-11-20 07:27:12.824644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.533 [2024-11-20 07:27:12.824674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.533 [2024-11-20 07:27:12.824693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.533 [2024-11-20 07:27:12.830228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.533 [2024-11-20 07:27:12.830260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.533 [2024-11-20 07:27:12.830294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.533 [2024-11-20 07:27:12.837700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.533 [2024-11-20 07:27:12.837733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:09.533 [2024-11-20 07:27:12.837752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.533 [2024-11-20 07:27:12.844153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.533 [2024-11-20 07:27:12.844186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.533 [2024-11-20 07:27:12.844204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.533 [2024-11-20 07:27:12.849526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.533 [2024-11-20 07:27:12.849559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.533 [2024-11-20 07:27:12.849577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.533 [2024-11-20 07:27:12.854976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.533 [2024-11-20 07:27:12.855022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.533 [2024-11-20 07:27:12.855040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.533 [2024-11-20 07:27:12.861692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.533 [2024-11-20 07:27:12.861726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.533 [2024-11-20 07:27:12.861751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.533 [2024-11-20 07:27:12.866986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.533 [2024-11-20 07:27:12.867019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.533 [2024-11-20 07:27:12.867038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.533 [2024-11-20 07:27:12.872311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.533 [2024-11-20 07:27:12.872343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.533 [2024-11-20 07:27:12.872362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.533 [2024-11-20 07:27:12.877275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.533 [2024-11-20 07:27:12.877314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.533 [2024-11-20 07:27:12.877335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.533 [2024-11-20 07:27:12.881992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.533 [2024-11-20 07:27:12.882024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.533 [2024-11-20 07:27:12.882041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.533 [2024-11-20 07:27:12.886544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.533 [2024-11-20 07:27:12.886575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.533 [2024-11-20 07:27:12.886593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.533 [2024-11-20 07:27:12.892132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.533 [2024-11-20 07:27:12.892180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.533 [2024-11-20 07:27:12.892197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.533 [2024-11-20 07:27:12.899408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.533 [2024-11-20 07:27:12.899455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.533 [2024-11-20 07:27:12.899473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.533 [2024-11-20 07:27:12.906171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.533 [2024-11-20 07:27:12.906203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.533 [2024-11-20 07:27:12.906222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.533 [2024-11-20 07:27:12.912001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.533 [2024-11-20 07:27:12.912040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.533 [2024-11-20 07:27:12.912060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.533 [2024-11-20 07:27:12.918234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.533 [2024-11-20 07:27:12.918268] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.533 [2024-11-20 07:27:12.918287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.533 [2024-11-20 07:27:12.924709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.533 [2024-11-20 07:27:12.924742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.533 [2024-11-20 07:27:12.924760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.533 [2024-11-20 07:27:12.929717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.533 [2024-11-20 07:27:12.929749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.533 [2024-11-20 07:27:12.929767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.533 [2024-11-20 07:27:12.933466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.533 [2024-11-20 07:27:12.933498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.533 [2024-11-20 07:27:12.933517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.533 [2024-11-20 07:27:12.939245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.534 [2024-11-20 07:27:12.939278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.534 [2024-11-20 07:27:12.939322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.534 [2024-11-20 07:27:12.945557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.534 [2024-11-20 07:27:12.945605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.534 [2024-11-20 07:27:12.945623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.534 [2024-11-20 07:27:12.952292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.534 [2024-11-20 07:27:12.952332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.534 [2024-11-20 07:27:12.952359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.534 [2024-11-20 07:27:12.959074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0) 00:25:09.534 
[2024-11-20 07:27:12.959108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.534 [2024-11-20 07:27:12.959127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:09.792 [2024-11-20 07:27:12.965462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0)
00:25:09.792 [2024-11-20 07:27:12.965507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.792 [2024-11-20 07:27:12.965524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:09.792 [2024-11-20 07:27:12.971787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0)
00:25:09.792 [2024-11-20 07:27:12.971820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.792 [2024-11-20 07:27:12.971839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:09.792 [2024-11-20 07:27:12.978147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14fbdc0)
00:25:09.792 [2024-11-20 07:27:12.978180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.792 [2024-11-20 07:27:12.978213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:09.792 5596.50 IOPS, 699.56 MiB/s
00:25:09.792 Latency(us)
00:25:09.792 [2024-11-20T06:27:13.225Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:09.792 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:25:09.792 nvme0n1 : 2.00 5594.42 699.30 0.00 0.00 2855.89 728.18 9223.59
00:25:09.792 [2024-11-20T06:27:13.225Z] ===================================================================================================================
00:25:09.792 [2024-11-20T06:27:13.225Z] Total : 5594.42 699.30 0.00 0.00 2855.89 728.18 9223.59
00:25:09.792 {
00:25:09.792 "results": [
00:25:09.792 {
00:25:09.792 "job": "nvme0n1",
00:25:09.792 "core_mask": "0x2",
00:25:09.792 "workload": "randread",
00:25:09.792 "status": "finished",
00:25:09.792 "queue_depth": 16,
00:25:09.792 "io_size": 131072,
00:25:09.792 "runtime": 2.003782,
00:25:09.792 "iops": 5594.420949983581,
00:25:09.792 "mibps": 699.3026187479476,
00:25:09.792 "io_failed": 0,
00:25:09.792 "io_timeout": 0,
00:25:09.792 "avg_latency_us": 2855.8946231869695,
00:25:09.792 "min_latency_us": 728.1777777777778,
00:25:09.792 "max_latency_us": 9223.585185185186
00:25:09.792 }
00:25:09.792 ],
00:25:09.792 "core_count": 1
00:25:09.792 }
00:25:09.792 07:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:09.792 07:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:09.792 07:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:25:09.792 07:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:09.792 | .driver_specific
00:25:09.792 | .nvme_error
00:25:09.792 | .status_code
00:25:09.792 | .command_transient_transport_error'
00:25:10.050 07:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 362 > 0 ))
00:25:10.050 07:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2606947
00:25:10.050 07:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 2606947 ']'
00:25:10.050 07:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 2606947
00:25:10.050 07:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:25:10.050 07:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:25:10.050 07:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2606947
00:25:10.051 07:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:25:10.051 07:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:25:10.051 07:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2606947'
00:25:10.051 killing process with pid 2606947
00:25:10.051 07:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 2606947
00:25:10.051 Received shutdown signal, test time was about 2.000000 seconds
00:25:10.051
00:25:10.051 Latency(us)
00:25:10.051 [2024-11-20T06:27:13.484Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:10.051 [2024-11-20T06:27:13.484Z] ===================================================================================================================
00:25:10.051 [2024-11-20T06:27:13.484Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:10.051 07:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 2606947
00:25:10.309 07:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:25:10.309 07:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:25:10.309 07:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:25:10.309 07:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:25:10.309 07:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:25:10.309 07:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2607473
00:25:10.309 07:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2607473 /var/tmp/bperf.sock
00:25:10.309 07:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:25:10.309 07:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 2607473 ']'
00:25:10.309 07:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:25:10.309 07:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:25:10.309 07:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:25:10.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:25:10.309 07:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:25:10.309 07:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:10.309 [2024-11-20 07:27:13.576321] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization...
00:25:10.309 [2024-11-20 07:27:13.576410] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2607473 ]
00:25:10.309 [2024-11-20 07:27:13.641253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:10.309 [2024-11-20 07:27:13.696111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:25:10.567 07:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:25:10.567 07:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:25:10.567 07:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:10.567 07:27:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:10.825 07:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:25:10.825 07:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:10.825 07:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:10.825 07:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:10.825 07:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:10.825 07:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:11.083 nvme0n1
00:25:11.083 07:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:25:11.083 07:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:11.083 07:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error
-- common/autotest_common.sh@10 -- # set +x 00:25:11.083 07:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.083 07:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:11.083 07:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:11.342 Running I/O for 2 seconds... 00:25:11.342 [2024-11-20 07:27:14.577896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016ee1f80 00:25:11.342 [2024-11-20 07:27:14.579090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.342 [2024-11-20 07:27:14.579135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:11.342 [2024-11-20 07:27:14.589374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016ee3d08 00:25:11.342 [2024-11-20 07:27:14.590355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.342 [2024-11-20 07:27:14.590412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:11.342 [2024-11-20 07:27:14.600734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016ef81e0 00:25:11.342 [2024-11-20 07:27:14.601532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.342 [2024-11-20 07:27:14.601565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:11.342 [2024-11-20 07:27:14.613084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016ef2d80 00:25:11.342 [2024-11-20 07:27:14.614166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.342 [2024-11-20 07:27:14.614212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:11.342 [2024-11-20 07:27:14.625546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016ef3a28 00:25:11.342 [2024-11-20 07:27:14.626861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.342 [2024-11-20 07:27:14.626907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:11.342 [2024-11-20 07:27:14.639996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016ef0350 00:25:11.342 [2024-11-20 07:27:14.641948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.342 [2024-11-20 07:27:14.641994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:11.342 [2024-11-20 07:27:14.648512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016ef6020 00:25:11.342 [2024-11-20 07:27:14.649525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.342 [2024-11-20 07:27:14.649555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:11.342 [2024-11-20 07:27:14.660802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016ef7da8 00:25:11.342 [2024-11-20 07:27:14.661955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.342 [2024-11-20 07:27:14.662000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:11.342 [2024-11-20 07:27:14.674921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016ee38d0 00:25:11.342 [2024-11-20 07:27:14.676635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:17381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.342 [2024-11-20 07:27:14.676666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:11.342 [2024-11-20 07:27:14.683392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016ef2d80 00:25:11.342 [2024-11-20 07:27:14.684216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.342 [2024-11-20 07:27:14.684261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:11.342 [2024-11-20 07:27:14.695443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eff3c8 00:25:11.343 [2024-11-20 07:27:14.696274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.343 [2024-11-20 07:27:14.696330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:11.343 [2024-11-20 07:27:14.710016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eebfd0 00:25:11.343 [2024-11-20 07:27:14.711435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:9651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.343 [2024-11-20 07:27:14.711481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:11.343 [2024-11-20 07:27:14.722145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016efdeb0 00:25:11.343 [2024-11-20 07:27:14.723934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.343 [2024-11-20 07:27:14.723979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:11.343 [2024-11-20 07:27:14.734167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eefae0 00:25:11.343 [2024-11-20 07:27:14.735942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.343 [2024-11-20 07:27:14.735994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:11.343 [2024-11-20 07:27:14.742157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eecc78 00:25:11.343 [2024-11-20 07:27:14.743003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.343 [2024-11-20 07:27:14.743046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:11.343 [2024-11-20 07:27:14.754452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016efef90 00:25:11.343 [2024-11-20 07:27:14.755409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.343 [2024-11-20 07:27:14.755454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:11.343 [2024-11-20 07:27:14.768816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eed0b0 00:25:11.343 [2024-11-20 07:27:14.770405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.343 [2024-11-20 07:27:14.770451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:11.601 [2024-11-20 07:27:14.777897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016efe2e8 00:25:11.601 [2024-11-20 07:27:14.778762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.601 [2024-11-20 07:27:14.778807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:11.601 [2024-11-20 07:27:14.791252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eebb98 00:25:11.601 [2024-11-20 07:27:14.792226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.601 [2024-11-20 07:27:14.792270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:11.601 [2024-11-20 07:27:14.803444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:11.601 [2024-11-20 07:27:14.803655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.601 [2024-11-20 07:27:14.803682] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:11.601 [2024-11-20 07:27:14.817489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:11.601 [2024-11-20 07:27:14.817778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.601 [2024-11-20 07:27:14.817823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:11.601 [2024-11-20 07:27:14.831654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:11.601 [2024-11-20 07:27:14.831934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.601 [2024-11-20 07:27:14.831962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:11.601 [2024-11-20 07:27:14.845405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:11.601 [2024-11-20 07:27:14.845627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.601 [2024-11-20 07:27:14.845668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:11.601 [2024-11-20 07:27:14.859348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:11.601 [2024-11-20 07:27:14.859559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.601 [2024-11-20 07:27:14.859601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:11.601 [2024-11-20 07:27:14.873441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:11.601 [2024-11-20 07:27:14.873714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.601 [2024-11-20 07:27:14.873757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:11.601 [2024-11-20 07:27:14.887454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:11.601 [2024-11-20 07:27:14.887657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:16776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.601 [2024-11-20 07:27:14.887699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:11.601 [2024-11-20 07:27:14.901311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:11.602 [2024-11-20 07:27:14.901514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.602 [2024-11-20 
07:27:14.901559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:11.602 [2024-11-20 07:27:14.915314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:11.602 [2024-11-20 07:27:14.915536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:25406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.602 [2024-11-20 07:27:14.915582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:11.602 [2024-11-20 07:27:14.929412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:11.602 [2024-11-20 07:27:14.929619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.602 [2024-11-20 07:27:14.929661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:11.602 [2024-11-20 07:27:14.943511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:11.602 [2024-11-20 07:27:14.943711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.602 [2024-11-20 07:27:14.943751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:11.602 [2024-11-20 07:27:14.957487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:11.602 [2024-11-20 07:27:14.957717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:25533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.602 [2024-11-20 07:27:14.957759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:11.602 [2024-11-20 07:27:14.971492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:11.602 [2024-11-20 07:27:14.971770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.602 [2024-11-20 07:27:14.971813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:11.602 [2024-11-20 07:27:14.985645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:11.602 [2024-11-20 07:27:14.985814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.602 [2024-11-20 07:27:14.985840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:11.602 [2024-11-20 07:27:14.999567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:11.602 [2024-11-20 07:27:14.999832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18992 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:11.602 [2024-11-20 07:27:14.999875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:11.602 [2024-11-20 07:27:15.013564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:11.602 [2024-11-20 07:27:15.013761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.602 [2024-11-20 07:27:15.013804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:11.602 [2024-11-20 07:27:15.027437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:11.602 [2024-11-20 07:27:15.027633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.602 [2024-11-20 07:27:15.027661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:11.860 [2024-11-20 07:27:15.041286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:11.860 [2024-11-20 07:27:15.041526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.860 [2024-11-20 07:27:15.041556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:11.860 [2024-11-20 07:27:15.055266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:11.860 [2024-11-20 07:27:15.055533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:11884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.860 [2024-11-20 07:27:15.055563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:11.860 [2024-11-20 07:27:15.069241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:11.861 [2024-11-20 07:27:15.069445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.861 [2024-11-20 07:27:15.069487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:11.861 [2024-11-20 07:27:15.083174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:11.861 [2024-11-20 07:27:15.083398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.861 [2024-11-20 07:27:15.083432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:11.861 [2024-11-20 07:27:15.096914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:11.861 [2024-11-20 07:27:15.097141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11817 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.861 [2024-11-20 07:27:15.097185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:11.861 [2024-11-20 07:27:15.110772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:11.861 [2024-11-20 07:27:15.110975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.861 [2024-11-20 07:27:15.111003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:11.861 [2024-11-20 07:27:15.124774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:11.861 [2024-11-20 07:27:15.124978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.861 [2024-11-20 07:27:15.125021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:11.861 [2024-11-20 07:27:15.138725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:11.861 [2024-11-20 07:27:15.138994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.861 [2024-11-20 07:27:15.139021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:11.861 [2024-11-20 07:27:15.152741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:11.861 [2024-11-20 07:27:15.153000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.861 [2024-11-20 07:27:15.153046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:11.861 [2024-11-20 07:27:15.166625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:11.861 [2024-11-20 07:27:15.166823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:25093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.861 [2024-11-20 07:27:15.166849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:11.861 [2024-11-20 07:27:15.180676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:11.861 [2024-11-20 07:27:15.180888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.861 [2024-11-20 07:27:15.180930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:11.861 [2024-11-20 07:27:15.194663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:11.861 [2024-11-20 07:27:15.194939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:48 nsid:1 lba:24846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.861 [2024-11-20 07:27:15.194983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:11.861 [2024-11-20 07:27:15.208803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:11.861 [2024-11-20 07:27:15.209080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.861 [2024-11-20 07:27:15.209122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:11.861 [2024-11-20 07:27:15.222685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:11.861 [2024-11-20 07:27:15.222861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.861 [2024-11-20 07:27:15.222905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:11.861 [2024-11-20 07:27:15.236619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:11.861 [2024-11-20 07:27:15.236847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.861 [2024-11-20 07:27:15.236892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:11.861 [2024-11-20 07:27:15.250667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:11.861 [2024-11-20 07:27:15.250887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.861 [2024-11-20 07:27:15.250931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:11.861 [2024-11-20 07:27:15.264747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:11.861 [2024-11-20 07:27:15.264996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.861 [2024-11-20 07:27:15.265041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:11.861 [2024-11-20 07:27:15.278687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:11.861 [2024-11-20 07:27:15.278895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:3705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.861 [2024-11-20 07:27:15.278935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.119 [2024-11-20 07:27:15.292832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.119 [2024-11-20 07:27:15.293056] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.119 [2024-11-20 07:27:15.293101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.119 [2024-11-20 07:27:15.306801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.119 [2024-11-20 07:27:15.307023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.119 [2024-11-20 07:27:15.307065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.119 [2024-11-20 07:27:15.320708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.119 [2024-11-20 07:27:15.320918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.119 [2024-11-20 07:27:15.320960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.119 [2024-11-20 07:27:15.334662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.119 [2024-11-20 07:27:15.334955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.119 [2024-11-20 07:27:15.334982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.119 [2024-11-20 07:27:15.348224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.119 [2024-11-20 07:27:15.348433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.119 [2024-11-20 07:27:15.348461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.119 [2024-11-20 07:27:15.362212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.119 [2024-11-20 07:27:15.362414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.119 [2024-11-20 07:27:15.362441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.119 [2024-11-20 07:27:15.376270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.119 [2024-11-20 07:27:15.376500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:3451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.119 [2024-11-20 07:27:15.376546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.119 [2024-11-20 07:27:15.390350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.119 [2024-11-20 
07:27:15.390562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.119 [2024-11-20 07:27:15.390607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.119 [2024-11-20 07:27:15.404273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.119 [2024-11-20 07:27:15.404532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.119 [2024-11-20 07:27:15.404575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.119 [2024-11-20 07:27:15.418354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.119 [2024-11-20 07:27:15.418522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.119 [2024-11-20 07:27:15.418548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.119 [2024-11-20 07:27:15.432296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.119 [2024-11-20 07:27:15.432483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.119 [2024-11-20 07:27:15.432510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.119 [2024-11-20 07:27:15.446255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.119 [2024-11-20 07:27:15.446542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.120 [2024-11-20 07:27:15.446594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.120 [2024-11-20 07:27:15.460167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.120 [2024-11-20 07:27:15.460385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.120 [2024-11-20 07:27:15.460427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.120 [2024-11-20 07:27:15.474237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.120 [2024-11-20 07:27:15.474466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.120 [2024-11-20 07:27:15.474510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.120 [2024-11-20 07:27:15.488205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 
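The WRITE-side failures in this stretch are the intended effect of the crc32c error injection set up in the trace above (rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256): with corrupted CRC32C results the data-digest check fails, tcp.c reports it at data_crc32_calc_done, and each affected WRITE completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22). A condensed sketch of that setup, using the paths and arguments visible in this log, follows; the target-side RPC socket is an assumption, since digest.sh issues the injection through its rpc_cmd helper and that socket address is not shown here.

# Condensed sketch of the randwrite digest-error setup from the trace above.
# Everything except TGT_SOCK is taken from this run.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bperf.sock      # bdevperf RPC socket (bperf_rpc)
TGT_SOCK=/var/run/spdk.sock   # assumed socket of the app whose crc32c accel ops get corrupted (rpc_cmd)

# Start bdevperf idle (-z) with the randwrite, 4 KiB, qd=128, 2 s workload.
"$SPDK_DIR"/build/examples/bdevperf -m 2 -r "$SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &

# Collect NVMe error statistics and let the bdev layer keep retrying failed I/O (-1).
"$SPDK_DIR"/scripts/rpc.py -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the target with data digest (DDGST) enabled ...
"$SPDK_DIR"/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# ... then corrupt the next 256 crc32c operations so the digest checks fail.
"$SPDK_DIR"/scripts/rpc.py -s "$TGT_SOCK" accel_error_inject_error -o crc32c -t corrupt -i 256

# Run the timed I/O phase; the digest errors logged here are the expected outcome.
"$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests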
00:25:12.120 [2024-11-20 07:27:15.488409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.120 [2024-11-20 07:27:15.488450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.120 [2024-11-20 07:27:15.502254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.120 [2024-11-20 07:27:15.502550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.120 [2024-11-20 07:27:15.502596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.120 [2024-11-20 07:27:15.516114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.120 [2024-11-20 07:27:15.516339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.120 [2024-11-20 07:27:15.516381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.120 [2024-11-20 07:27:15.530215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.120 [2024-11-20 07:27:15.530419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.120 [2024-11-20 07:27:15.530463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.120 [2024-11-20 07:27:15.544358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.120 [2024-11-20 07:27:15.544590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.120 [2024-11-20 07:27:15.544639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.378 [2024-11-20 07:27:15.558222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.378 [2024-11-20 07:27:15.558456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.378 [2024-11-20 07:27:15.558500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.378 19003.00 IOPS, 74.23 MiB/s [2024-11-20T06:27:15.811Z] [2024-11-20 07:27:15.572215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.378 [2024-11-20 07:27:15.572430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.378 [2024-11-20 07:27:15.572461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.378 [2024-11-20 07:27:15.586006] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.378 [2024-11-20 07:27:15.586222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.378 [2024-11-20 07:27:15.586264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.378 [2024-11-20 07:27:15.599678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.378 [2024-11-20 07:27:15.599868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.378 [2024-11-20 07:27:15.599895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.378 [2024-11-20 07:27:15.613533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.379 [2024-11-20 07:27:15.613810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.379 [2024-11-20 07:27:15.613852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.379 [2024-11-20 07:27:15.627565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.379 [2024-11-20 07:27:15.627842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.379 [2024-11-20 07:27:15.627885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.379 [2024-11-20 07:27:15.641695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.379 [2024-11-20 07:27:15.641904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.379 [2024-11-20 07:27:15.641947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.379 [2024-11-20 07:27:15.655790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.379 [2024-11-20 07:27:15.655995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.379 [2024-11-20 07:27:15.656039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.379 [2024-11-20 07:27:15.669805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.379 [2024-11-20 07:27:15.670012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.379 [2024-11-20 07:27:15.670039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.379 
[2024-11-20 07:27:15.683841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.379 [2024-11-20 07:27:15.684010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:17383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.379 [2024-11-20 07:27:15.684038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.379 [2024-11-20 07:27:15.697890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.379 [2024-11-20 07:27:15.698130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.379 [2024-11-20 07:27:15.698173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.379 [2024-11-20 07:27:15.711657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.379 [2024-11-20 07:27:15.711899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.379 [2024-11-20 07:27:15.711943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.379 [2024-11-20 07:27:15.725627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.379 [2024-11-20 07:27:15.725879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:3264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.379 [2024-11-20 07:27:15.725924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.379 [2024-11-20 07:27:15.739504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.379 [2024-11-20 07:27:15.739718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.379 [2024-11-20 07:27:15.739747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.379 [2024-11-20 07:27:15.753491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.379 [2024-11-20 07:27:15.753791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.379 [2024-11-20 07:27:15.753821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.379 [2024-11-20 07:27:15.767419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.379 [2024-11-20 07:27:15.767621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.379 [2024-11-20 07:27:15.767662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0067 p:0 
m:0 dnr:0 00:25:12.379 [2024-11-20 07:27:15.781491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.379 [2024-11-20 07:27:15.781748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.379 [2024-11-20 07:27:15.781777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.379 [2024-11-20 07:27:15.795379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.379 [2024-11-20 07:27:15.795669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:35 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.379 [2024-11-20 07:27:15.795699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.379 [2024-11-20 07:27:15.809377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.379 [2024-11-20 07:27:15.809538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.379 [2024-11-20 07:27:15.809572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.637 [2024-11-20 07:27:15.823061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.638 [2024-11-20 07:27:15.823247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.638 [2024-11-20 07:27:15.823273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.638 [2024-11-20 07:27:15.837078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.638 [2024-11-20 07:27:15.837338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.638 [2024-11-20 07:27:15.837366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.638 [2024-11-20 07:27:15.850917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.638 [2024-11-20 07:27:15.851185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.638 [2024-11-20 07:27:15.851230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.638 [2024-11-20 07:27:15.864783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.638 [2024-11-20 07:27:15.865016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.638 [2024-11-20 07:27:15.865062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:59 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.638 [2024-11-20 07:27:15.878761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.638 [2024-11-20 07:27:15.878996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.638 [2024-11-20 07:27:15.879041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.638 [2024-11-20 07:27:15.892736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.638 [2024-11-20 07:27:15.892982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.638 [2024-11-20 07:27:15.893027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.638 [2024-11-20 07:27:15.906656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.638 [2024-11-20 07:27:15.906941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.638 [2024-11-20 07:27:15.906971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.638 [2024-11-20 07:27:15.920639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.638 [2024-11-20 07:27:15.920835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.638 [2024-11-20 07:27:15.920879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.638 [2024-11-20 07:27:15.934555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.638 [2024-11-20 07:27:15.934781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.638 [2024-11-20 07:27:15.934825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.638 [2024-11-20 07:27:15.948505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.638 [2024-11-20 07:27:15.948739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.638 [2024-11-20 07:27:15.948784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.638 [2024-11-20 07:27:15.962388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.638 [2024-11-20 07:27:15.962630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.638 [2024-11-20 07:27:15.962659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.638 [2024-11-20 07:27:15.976377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.638 [2024-11-20 07:27:15.976582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:25560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.638 [2024-11-20 07:27:15.976609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.638 [2024-11-20 07:27:15.989978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.638 [2024-11-20 07:27:15.990178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:17627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.638 [2024-11-20 07:27:15.990206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.638 [2024-11-20 07:27:16.003769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.638 [2024-11-20 07:27:16.004107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.638 [2024-11-20 07:27:16.004152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.638 [2024-11-20 07:27:16.017557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.638 [2024-11-20 07:27:16.017789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:11423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.638 [2024-11-20 07:27:16.017818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.638 [2024-11-20 07:27:16.031442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.638 [2024-11-20 07:27:16.031632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.638 [2024-11-20 07:27:16.031659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.638 [2024-11-20 07:27:16.045227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.638 [2024-11-20 07:27:16.045472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:3870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.638 [2024-11-20 07:27:16.045502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.638 [2024-11-20 07:27:16.059258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.638 [2024-11-20 07:27:16.059468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.638 [2024-11-20 07:27:16.059497] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.897 [2024-11-20 07:27:16.072714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.897 [2024-11-20 07:27:16.072918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.897 [2024-11-20 07:27:16.072945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.897 [2024-11-20 07:27:16.086582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.897 [2024-11-20 07:27:16.086809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.897 [2024-11-20 07:27:16.086854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.897 [2024-11-20 07:27:16.100416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.897 [2024-11-20 07:27:16.100615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.897 [2024-11-20 07:27:16.100643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.897 [2024-11-20 07:27:16.113940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.897 [2024-11-20 07:27:16.114165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.897 [2024-11-20 07:27:16.114219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.897 [2024-11-20 07:27:16.127599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.897 [2024-11-20 07:27:16.127836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.897 [2024-11-20 07:27:16.127866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.897 [2024-11-20 07:27:16.141427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.897 [2024-11-20 07:27:16.141639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.897 [2024-11-20 07:27:16.141669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.897 [2024-11-20 07:27:16.155218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.897 [2024-11-20 07:27:16.155430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.897 [2024-11-20 07:27:16.155458] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.897 [2024-11-20 07:27:16.169010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.897 [2024-11-20 07:27:16.169280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.897 [2024-11-20 07:27:16.169338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.897 [2024-11-20 07:27:16.182901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.897 [2024-11-20 07:27:16.183085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.897 [2024-11-20 07:27:16.183112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.897 [2024-11-20 07:27:16.196564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.897 [2024-11-20 07:27:16.196794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.897 [2024-11-20 07:27:16.196838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.897 [2024-11-20 07:27:16.210344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.897 [2024-11-20 07:27:16.210521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.897 [2024-11-20 07:27:16.210548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.897 [2024-11-20 07:27:16.224174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.897 [2024-11-20 07:27:16.224399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.897 [2024-11-20 07:27:16.224427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.897 [2024-11-20 07:27:16.238133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.897 [2024-11-20 07:27:16.238338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.897 [2024-11-20 07:27:16.238366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.897 [2024-11-20 07:27:16.252028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.897 [2024-11-20 07:27:16.252307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.897 [2024-11-20 
07:27:16.252337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.897 [2024-11-20 07:27:16.265977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.897 [2024-11-20 07:27:16.266195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.897 [2024-11-20 07:27:16.266238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.897 [2024-11-20 07:27:16.279960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.897 [2024-11-20 07:27:16.280247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.897 [2024-11-20 07:27:16.280276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.897 [2024-11-20 07:27:16.293867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.897 [2024-11-20 07:27:16.294089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:15373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.897 [2024-11-20 07:27:16.294133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.897 [2024-11-20 07:27:16.307675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.897 [2024-11-20 07:27:16.307879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.897 [2024-11-20 07:27:16.307908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:12.897 [2024-11-20 07:27:16.320637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:12.897 [2024-11-20 07:27:16.320840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.897 [2024-11-20 07:27:16.320867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:13.156 [2024-11-20 07:27:16.333882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:13.156 [2024-11-20 07:27:16.334041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:25547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.156 [2024-11-20 07:27:16.334068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:13.156 [2024-11-20 07:27:16.347877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:13.156 [2024-11-20 07:27:16.348097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:13.156 [2024-11-20 07:27:16.348127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:13.156 [2024-11-20 07:27:16.361717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:13.156 [2024-11-20 07:27:16.361921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.156 [2024-11-20 07:27:16.361950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:13.156 [2024-11-20 07:27:16.375179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:13.156 [2024-11-20 07:27:16.375396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.156 [2024-11-20 07:27:16.375425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:13.156 [2024-11-20 07:27:16.388909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:13.156 [2024-11-20 07:27:16.389084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.156 [2024-11-20 07:27:16.389112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:13.156 [2024-11-20 07:27:16.402106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:13.156 [2024-11-20 07:27:16.402325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.156 [2024-11-20 07:27:16.402353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:13.156 [2024-11-20 07:27:16.415522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:13.156 [2024-11-20 07:27:16.415763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.156 [2024-11-20 07:27:16.415793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:13.156 [2024-11-20 07:27:16.429541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:13.156 [2024-11-20 07:27:16.429801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.156 [2024-11-20 07:27:16.429846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:13.156 [2024-11-20 07:27:16.443589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:13.156 [2024-11-20 07:27:16.443839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:6057 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:13.156 [2024-11-20 07:27:16.443883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:13.156 [2024-11-20 07:27:16.457579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:13.156 [2024-11-20 07:27:16.457808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.156 [2024-11-20 07:27:16.457852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:13.156 [2024-11-20 07:27:16.471432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:13.156 [2024-11-20 07:27:16.471583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.156 [2024-11-20 07:27:16.471610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:13.156 [2024-11-20 07:27:16.485446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:13.156 [2024-11-20 07:27:16.485650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.156 [2024-11-20 07:27:16.485693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:13.156 [2024-11-20 07:27:16.499401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:13.156 [2024-11-20 07:27:16.499608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:3883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.156 [2024-11-20 07:27:16.499653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:13.156 [2024-11-20 07:27:16.513387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:13.156 [2024-11-20 07:27:16.513720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:8603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.156 [2024-11-20 07:27:16.513764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:13.156 [2024-11-20 07:27:16.527362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:13.156 [2024-11-20 07:27:16.527566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.156 [2024-11-20 07:27:16.527615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:13.156 [2024-11-20 07:27:16.541205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:13.156 [2024-11-20 07:27:16.541467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 
lba:12491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.156 [2024-11-20 07:27:16.541495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:13.156 [2024-11-20 07:27:16.555092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:13.156 [2024-11-20 07:27:16.555279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.156 [2024-11-20 07:27:16.555329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:13.156 [2024-11-20 07:27:16.569141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188dd50) with pdu=0x200016eeea00 00:25:13.156 [2024-11-20 07:27:16.569373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.156 [2024-11-20 07:27:16.569401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:13.156 18733.00 IOPS, 73.18 MiB/s 00:25:13.156 Latency(us) 00:25:13.156 [2024-11-20T06:27:16.589Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:13.156 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:13.156 nvme0n1 : 2.01 18730.28 73.17 0.00 0.00 6817.86 2694.26 15049.01 00:25:13.156 [2024-11-20T06:27:16.589Z] =================================================================================================================== 00:25:13.156 [2024-11-20T06:27:16.590Z] Total : 18730.28 73.17 0.00 0.00 6817.86 2694.26 15049.01 00:25:13.157 { 00:25:13.157 "results": [ 00:25:13.157 { 00:25:13.157 "job": "nvme0n1", 00:25:13.157 "core_mask": "0x2", 00:25:13.157 "workload": "randwrite", 00:25:13.157 "status": "finished", 00:25:13.157 "queue_depth": 128, 00:25:13.157 "io_size": 4096, 00:25:13.157 "runtime": 2.006751, 00:25:13.157 "iops": 18730.275953518896, 00:25:13.157 "mibps": 73.16514044343319, 00:25:13.157 "io_failed": 0, 00:25:13.157 "io_timeout": 0, 00:25:13.157 "avg_latency_us": 6817.864162885317, 00:25:13.157 "min_latency_us": 2694.257777777778, 00:25:13.157 "max_latency_us": 15049.007407407407 00:25:13.157 } 00:25:13.157 ], 00:25:13.157 "core_count": 1 00:25:13.157 } 00:25:13.415 07:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:13.415 07:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:13.415 07:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:13.415 07:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:13.415 | .driver_specific 00:25:13.415 | .nvme_error 00:25:13.415 | .status_code 00:25:13.415 | .command_transient_transport_error' 00:25:13.673 07:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 147 > 0 )) 00:25:13.673 07:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2607473 00:25:13.673 07:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@952 -- # '[' -z 2607473 ']' 00:25:13.673 07:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 2607473 00:25:13.673 07:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:25:13.673 07:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:13.673 07:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2607473 00:25:13.673 07:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:13.673 07:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:13.673 07:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2607473' 00:25:13.673 killing process with pid 2607473 00:25:13.673 07:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 2607473 00:25:13.673 Received shutdown signal, test time was about 2.000000 seconds 00:25:13.673 00:25:13.673 Latency(us) 00:25:13.673 [2024-11-20T06:27:17.106Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:13.673 [2024-11-20T06:27:17.106Z] =================================================================================================================== 00:25:13.673 [2024-11-20T06:27:17.106Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:13.673 07:27:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 2607473 00:25:13.931 07:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:25:13.931 07:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:13.931 07:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:13.931 07:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:13.931 07:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:13.931 07:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2607883 00:25:13.931 07:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:25:13.931 07:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2607883 /var/tmp/bperf.sock 00:25:13.931 07:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 2607883 ']' 00:25:13.931 07:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:13.931 07:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:13.931 07:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:13.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:25:13.931 07:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:13.931 07:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:13.931 [2024-11-20 07:27:17.210573] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:25:13.931 [2024-11-20 07:27:17.210673] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2607883 ] 00:25:13.931 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:13.931 Zero copy mechanism will not be used. 00:25:13.931 [2024-11-20 07:27:17.284531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.931 [2024-11-20 07:27:17.345441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:14.189 07:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:14.189 07:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:25:14.189 07:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:14.189 07:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:14.447 07:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:14.447 07:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.447 07:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:14.447 07:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.447 07:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:14.447 07:27:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:15.012 nvme0n1 00:25:15.012 07:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:15.012 07:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.012 07:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:15.012 07:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.012 07:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:15.012 07:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 
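[editor's note, not part of the captured log] A condensed sketch of the digest-error exercise this trace is running, assembled only from commands that appear in the trace itself (the bdevperf instance listens on /var/tmp/bperf.sock; the crc32c corruption is injected through rpc_cmd, i.e. the target app's default RPC socket). It is a hedged reconstruction of the flow, not the test script verbatim:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # bdevperf instance that will issue the randwrite workload (131072-byte I/O, qd 16)
    $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &

    # keep per-controller NVMe error counters and retry failed I/O indefinitely
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # attach with --ddgst so the TCP data digest is generated and verified end to end
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # corrupt the next 32 crc32c results on the target side, then drive I/O
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

    # count commands that completed with a transient transport error -- these are the
    # "Data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR" lines that fill this log
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 | \
        jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The test passes when that counter is greater than zero (e.g. the "(( 147 > 0 ))" check earlier in this trace), confirming that corrupted data digests are detected and surfaced as transient transport errors rather than silent data corruption.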
00:25:15.012 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:15.012 Zero copy mechanism will not be used. 00:25:15.012 Running I/O for 2 seconds... 00:25:15.013 [2024-11-20 07:27:18.382627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.013 [2024-11-20 07:27:18.382721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.013 [2024-11-20 07:27:18.382759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.013 [2024-11-20 07:27:18.388000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.013 [2024-11-20 07:27:18.388262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.013 [2024-11-20 07:27:18.388311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.013 [2024-11-20 07:27:18.392984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.013 [2024-11-20 07:27:18.393332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.013 [2024-11-20 07:27:18.393363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.013 [2024-11-20 07:27:18.397911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.013 [2024-11-20 07:27:18.398227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.013 [2024-11-20 07:27:18.398256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.013 [2024-11-20 07:27:18.402762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.013 [2024-11-20 07:27:18.403091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.013 [2024-11-20 07:27:18.403122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.013 [2024-11-20 07:27:18.407516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.013 [2024-11-20 07:27:18.407849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.013 [2024-11-20 07:27:18.407878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.013 [2024-11-20 07:27:18.412448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.013 [2024-11-20 07:27:18.412755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:15.013 [2024-11-20 07:27:18.412784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.013 [2024-11-20 07:27:18.417207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.013 [2024-11-20 07:27:18.417531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.013 [2024-11-20 07:27:18.417560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.013 [2024-11-20 07:27:18.422016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.013 [2024-11-20 07:27:18.422334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.013 [2024-11-20 07:27:18.422363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.013 [2024-11-20 07:27:18.426861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.013 [2024-11-20 07:27:18.427176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.013 [2024-11-20 07:27:18.427206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.013 [2024-11-20 07:27:18.431519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.013 [2024-11-20 07:27:18.431837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.013 [2024-11-20 07:27:18.431866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.013 [2024-11-20 07:27:18.436260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.013 [2024-11-20 07:27:18.436565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.013 [2024-11-20 07:27:18.436594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.013 [2024-11-20 07:27:18.440999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.013 [2024-11-20 07:27:18.441320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.013 [2024-11-20 07:27:18.441353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.272 [2024-11-20 07:27:18.445658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.272 [2024-11-20 07:27:18.445950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.272 [2024-11-20 07:27:18.445978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.272 [2024-11-20 07:27:18.450263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.272 [2024-11-20 07:27:18.450575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.272 [2024-11-20 07:27:18.450604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.272 [2024-11-20 07:27:18.454977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.272 [2024-11-20 07:27:18.455277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.272 [2024-11-20 07:27:18.455313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.272 [2024-11-20 07:27:18.459645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.272 [2024-11-20 07:27:18.459932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.272 [2024-11-20 07:27:18.459960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.272 [2024-11-20 07:27:18.464390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.272 [2024-11-20 07:27:18.464656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.272 [2024-11-20 07:27:18.464686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.272 [2024-11-20 07:27:18.469005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.272 [2024-11-20 07:27:18.469327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.272 [2024-11-20 07:27:18.469356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.272 [2024-11-20 07:27:18.473564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.272 [2024-11-20 07:27:18.473833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.272 [2024-11-20 07:27:18.473862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.272 [2024-11-20 07:27:18.478158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.272 [2024-11-20 07:27:18.478440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.272 [2024-11-20 07:27:18.478470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.272 [2024-11-20 07:27:18.482748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.272 [2024-11-20 07:27:18.483059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.272 [2024-11-20 07:27:18.483088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.272 [2024-11-20 07:27:18.487424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.272 [2024-11-20 07:27:18.487723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.273 [2024-11-20 07:27:18.487752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.273 [2024-11-20 07:27:18.492063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.273 [2024-11-20 07:27:18.492377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.273 [2024-11-20 07:27:18.492406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.273 [2024-11-20 07:27:18.496769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.273 [2024-11-20 07:27:18.497081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.273 [2024-11-20 07:27:18.497110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.273 [2024-11-20 07:27:18.501411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.273 [2024-11-20 07:27:18.501703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.273 [2024-11-20 07:27:18.501731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.273 [2024-11-20 07:27:18.505995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.273 [2024-11-20 07:27:18.506318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.273 [2024-11-20 07:27:18.506347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.273 [2024-11-20 07:27:18.510686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.273 [2024-11-20 07:27:18.510965] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.273 [2024-11-20 07:27:18.510995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.273 [2024-11-20 07:27:18.515282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.273 [2024-11-20 07:27:18.515633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.273 [2024-11-20 07:27:18.515663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.273 [2024-11-20 07:27:18.520098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.273 [2024-11-20 07:27:18.520381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.273 [2024-11-20 07:27:18.520410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.273 [2024-11-20 07:27:18.524916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.273 [2024-11-20 07:27:18.525231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.273 [2024-11-20 07:27:18.525260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.273 [2024-11-20 07:27:18.530020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.273 [2024-11-20 07:27:18.530373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.273 [2024-11-20 07:27:18.530402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.273 [2024-11-20 07:27:18.536014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.273 [2024-11-20 07:27:18.536389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.273 [2024-11-20 07:27:18.536418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.273 [2024-11-20 07:27:18.542617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.273 [2024-11-20 07:27:18.542961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.273 [2024-11-20 07:27:18.542990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.273 [2024-11-20 07:27:18.548974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.273 [2024-11-20 
07:27:18.549238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.273 [2024-11-20 07:27:18.549267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.273 [2024-11-20 07:27:18.554379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.273 [2024-11-20 07:27:18.554680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.273 [2024-11-20 07:27:18.554709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.273 [2024-11-20 07:27:18.559447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.273 [2024-11-20 07:27:18.559763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.273 [2024-11-20 07:27:18.559793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.273 [2024-11-20 07:27:18.564144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.273 [2024-11-20 07:27:18.564462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.273 [2024-11-20 07:27:18.564492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.273 [2024-11-20 07:27:18.569377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.273 [2024-11-20 07:27:18.569649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.273 [2024-11-20 07:27:18.569683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.273 [2024-11-20 07:27:18.573995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.273 [2024-11-20 07:27:18.574312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.273 [2024-11-20 07:27:18.574341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.273 [2024-11-20 07:27:18.578560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.273 [2024-11-20 07:27:18.578861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.273 [2024-11-20 07:27:18.578890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.273 [2024-11-20 07:27:18.583194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 
00:25:15.273 [2024-11-20 07:27:18.583523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.273 [2024-11-20 07:27:18.583552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.273 [2024-11-20 07:27:18.587912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.273 [2024-11-20 07:27:18.588215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.273 [2024-11-20 07:27:18.588243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.273 [2024-11-20 07:27:18.592520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.273 [2024-11-20 07:27:18.592881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.273 [2024-11-20 07:27:18.592909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.273 [2024-11-20 07:27:18.597407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.273 [2024-11-20 07:27:18.597794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.273 [2024-11-20 07:27:18.597839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.273 [2024-11-20 07:27:18.602146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.273 [2024-11-20 07:27:18.602489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.273 [2024-11-20 07:27:18.602518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.273 [2024-11-20 07:27:18.606802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.273 [2024-11-20 07:27:18.607118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.273 [2024-11-20 07:27:18.607147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.273 [2024-11-20 07:27:18.611406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.273 [2024-11-20 07:27:18.611706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.273 [2024-11-20 07:27:18.611735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.273 [2024-11-20 07:27:18.616068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.273 [2024-11-20 07:27:18.616350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.273 [2024-11-20 07:27:18.616380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.273 [2024-11-20 07:27:18.620660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.273 [2024-11-20 07:27:18.620940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.274 [2024-11-20 07:27:18.620968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.274 [2024-11-20 07:27:18.625165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.274 [2024-11-20 07:27:18.625444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.274 [2024-11-20 07:27:18.625473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.274 [2024-11-20 07:27:18.629697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.274 [2024-11-20 07:27:18.629970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.274 [2024-11-20 07:27:18.629998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.274 [2024-11-20 07:27:18.634282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.274 [2024-11-20 07:27:18.634598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.274 [2024-11-20 07:27:18.634627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.274 [2024-11-20 07:27:18.638876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.274 [2024-11-20 07:27:18.639147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.274 [2024-11-20 07:27:18.639176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.274 [2024-11-20 07:27:18.643188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.274 [2024-11-20 07:27:18.643509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.274 [2024-11-20 07:27:18.643538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.274 [2024-11-20 07:27:18.647452] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.274 [2024-11-20 07:27:18.647657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.274 [2024-11-20 07:27:18.647685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.274 [2024-11-20 07:27:18.651613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.274 [2024-11-20 07:27:18.651833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.274 [2024-11-20 07:27:18.651861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.274 [2024-11-20 07:27:18.655715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.274 [2024-11-20 07:27:18.655949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.274 [2024-11-20 07:27:18.655977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.274 [2024-11-20 07:27:18.660024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.274 [2024-11-20 07:27:18.660255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.274 [2024-11-20 07:27:18.660283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.274 [2024-11-20 07:27:18.664519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.274 [2024-11-20 07:27:18.664741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.274 [2024-11-20 07:27:18.664770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.274 [2024-11-20 07:27:18.669097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.274 [2024-11-20 07:27:18.669339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.274 [2024-11-20 07:27:18.669367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.274 [2024-11-20 07:27:18.673625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.274 [2024-11-20 07:27:18.673836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.274 [2024-11-20 07:27:18.673864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.274 [2024-11-20 07:27:18.678207] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.274 [2024-11-20 07:27:18.678438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.274 [2024-11-20 07:27:18.678466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.274 [2024-11-20 07:27:18.682736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.274 [2024-11-20 07:27:18.682960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.274 [2024-11-20 07:27:18.682988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.274 [2024-11-20 07:27:18.687382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.274 [2024-11-20 07:27:18.687596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.274 [2024-11-20 07:27:18.687629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.274 [2024-11-20 07:27:18.691774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.274 [2024-11-20 07:27:18.691993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.274 [2024-11-20 07:27:18.692021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.274 [2024-11-20 07:27:18.696274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.274 [2024-11-20 07:27:18.696511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.274 [2024-11-20 07:27:18.696540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.274 [2024-11-20 07:27:18.700792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.274 [2024-11-20 07:27:18.701036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.274 [2024-11-20 07:27:18.701065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.533 [2024-11-20 07:27:18.705397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.533 [2024-11-20 07:27:18.705570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.533 [2024-11-20 07:27:18.705601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.533 
[2024-11-20 07:27:18.709899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.533 [2024-11-20 07:27:18.710082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.533 [2024-11-20 07:27:18.710110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.533 [2024-11-20 07:27:18.714429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.533 [2024-11-20 07:27:18.714631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.533 [2024-11-20 07:27:18.714660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.533 [2024-11-20 07:27:18.718951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.533 [2024-11-20 07:27:18.719132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.533 [2024-11-20 07:27:18.719160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.533 [2024-11-20 07:27:18.723501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.533 [2024-11-20 07:27:18.723681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.533 [2024-11-20 07:27:18.723709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.533 [2024-11-20 07:27:18.728786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.533 [2024-11-20 07:27:18.729028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.533 [2024-11-20 07:27:18.729057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.533 [2024-11-20 07:27:18.733810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.533 [2024-11-20 07:27:18.734085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.534 [2024-11-20 07:27:18.734112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.534 [2024-11-20 07:27:18.739485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.534 [2024-11-20 07:27:18.739710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.534 [2024-11-20 07:27:18.739738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:25:15.534 [2024-11-20 07:27:18.744796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.534 [2024-11-20 07:27:18.745053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.534 [2024-11-20 07:27:18.745081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.534 [2024-11-20 07:27:18.749899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.534 [2024-11-20 07:27:18.750131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.534 [2024-11-20 07:27:18.750160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.534 [2024-11-20 07:27:18.755009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.534 [2024-11-20 07:27:18.755202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.534 [2024-11-20 07:27:18.755230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.534 [2024-11-20 07:27:18.760178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.534 [2024-11-20 07:27:18.760443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.534 [2024-11-20 07:27:18.760473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.534 [2024-11-20 07:27:18.765402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.534 [2024-11-20 07:27:18.765671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.534 [2024-11-20 07:27:18.765699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.534 [2024-11-20 07:27:18.770743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.534 [2024-11-20 07:27:18.770950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.534 [2024-11-20 07:27:18.770979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.534 [2024-11-20 07:27:18.775911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.534 [2024-11-20 07:27:18.776181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.534 [2024-11-20 07:27:18.776209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.534 [2024-11-20 07:27:18.781073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.534 [2024-11-20 07:27:18.781400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.534 [2024-11-20 07:27:18.781428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.534 [2024-11-20 07:27:18.786244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.534 [2024-11-20 07:27:18.786458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.534 [2024-11-20 07:27:18.786487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.534 [2024-11-20 07:27:18.791579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.534 [2024-11-20 07:27:18.791783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.534 [2024-11-20 07:27:18.791811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.534 [2024-11-20 07:27:18.796776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.534 [2024-11-20 07:27:18.796985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.534 [2024-11-20 07:27:18.797013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.534 [2024-11-20 07:27:18.802051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.534 [2024-11-20 07:27:18.802357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.534 [2024-11-20 07:27:18.802386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.534 [2024-11-20 07:27:18.807189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.534 [2024-11-20 07:27:18.807437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.534 [2024-11-20 07:27:18.807466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.534 [2024-11-20 07:27:18.812320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.534 [2024-11-20 07:27:18.812529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.534 [2024-11-20 07:27:18.812557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.534 [2024-11-20 07:27:18.817357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.534 [2024-11-20 07:27:18.817563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.534 [2024-11-20 07:27:18.817596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.534 [2024-11-20 07:27:18.822524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.534 [2024-11-20 07:27:18.822785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.534 [2024-11-20 07:27:18.822813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.534 [2024-11-20 07:27:18.827467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.534 [2024-11-20 07:27:18.827657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.534 [2024-11-20 07:27:18.827685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.534 [2024-11-20 07:27:18.832232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.534 [2024-11-20 07:27:18.832514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.534 [2024-11-20 07:27:18.832543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.534 [2024-11-20 07:27:18.837437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.534 [2024-11-20 07:27:18.837591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.534 [2024-11-20 07:27:18.837620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.534 [2024-11-20 07:27:18.843427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.534 [2024-11-20 07:27:18.843649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.534 [2024-11-20 07:27:18.843677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.534 [2024-11-20 07:27:18.848332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.534 [2024-11-20 07:27:18.848512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.534 [2024-11-20 07:27:18.848540] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.534 [2024-11-20 07:27:18.852639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.534 [2024-11-20 07:27:18.852831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.534 [2024-11-20 07:27:18.852859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.534 [2024-11-20 07:27:18.856992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.534 [2024-11-20 07:27:18.857213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.534 [2024-11-20 07:27:18.857241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.534 [2024-11-20 07:27:18.862338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.534 [2024-11-20 07:27:18.862527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.534 [2024-11-20 07:27:18.862556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.534 [2024-11-20 07:27:18.866981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.534 [2024-11-20 07:27:18.867173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.534 [2024-11-20 07:27:18.867202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.534 [2024-11-20 07:27:18.871293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.534 [2024-11-20 07:27:18.871476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.535 [2024-11-20 07:27:18.871505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.535 [2024-11-20 07:27:18.875526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.535 [2024-11-20 07:27:18.875726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.535 [2024-11-20 07:27:18.875756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.535 [2024-11-20 07:27:18.880686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.535 [2024-11-20 07:27:18.880912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.535 [2024-11-20 
07:27:18.880939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.535 [2024-11-20 07:27:18.885895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.535 [2024-11-20 07:27:18.886123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.535 [2024-11-20 07:27:18.886152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.535 [2024-11-20 07:27:18.892000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.535 [2024-11-20 07:27:18.892210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.535 [2024-11-20 07:27:18.892239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.535 [2024-11-20 07:27:18.897184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.535 [2024-11-20 07:27:18.897453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.535 [2024-11-20 07:27:18.897481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.535 [2024-11-20 07:27:18.902275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.535 [2024-11-20 07:27:18.902532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.535 [2024-11-20 07:27:18.902560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.535 [2024-11-20 07:27:18.907412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.535 [2024-11-20 07:27:18.907721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.535 [2024-11-20 07:27:18.907749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.535 [2024-11-20 07:27:18.912517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.535 [2024-11-20 07:27:18.912802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.535 [2024-11-20 07:27:18.912830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.535 [2024-11-20 07:27:18.917721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.535 [2024-11-20 07:27:18.918010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:15.535 [2024-11-20 07:27:18.918038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.535 [2024-11-20 07:27:18.922845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.535 [2024-11-20 07:27:18.923087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.535 [2024-11-20 07:27:18.923115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.535 [2024-11-20 07:27:18.928079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.535 [2024-11-20 07:27:18.928285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.535 [2024-11-20 07:27:18.928321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.535 [2024-11-20 07:27:18.933284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.535 [2024-11-20 07:27:18.933498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.535 [2024-11-20 07:27:18.933526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.535 [2024-11-20 07:27:18.938419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.535 [2024-11-20 07:27:18.938738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.535 [2024-11-20 07:27:18.938767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.535 [2024-11-20 07:27:18.943496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.535 [2024-11-20 07:27:18.943811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.535 [2024-11-20 07:27:18.943839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.535 [2024-11-20 07:27:18.948583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.535 [2024-11-20 07:27:18.948871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.535 [2024-11-20 07:27:18.948904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.535 [2024-11-20 07:27:18.953718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.535 [2024-11-20 07:27:18.953946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.535 [2024-11-20 07:27:18.953975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.535 [2024-11-20 07:27:18.958828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.535 [2024-11-20 07:27:18.959023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.535 [2024-11-20 07:27:18.959051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.794 [2024-11-20 07:27:18.963974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.794 [2024-11-20 07:27:18.964259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.794 [2024-11-20 07:27:18.964290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.794 [2024-11-20 07:27:18.969089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.794 [2024-11-20 07:27:18.969319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.794 [2024-11-20 07:27:18.969349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.794 [2024-11-20 07:27:18.974253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.794 [2024-11-20 07:27:18.974556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.794 [2024-11-20 07:27:18.974585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.794 [2024-11-20 07:27:18.979375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.794 [2024-11-20 07:27:18.979630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.794 [2024-11-20 07:27:18.979659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.794 [2024-11-20 07:27:18.984436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.794 [2024-11-20 07:27:18.984685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.794 [2024-11-20 07:27:18.984714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.794 [2024-11-20 07:27:18.989574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.794 [2024-11-20 07:27:18.989791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.794 [2024-11-20 07:27:18.989820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.794 [2024-11-20 07:27:18.994800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.794 [2024-11-20 07:27:18.995116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.794 [2024-11-20 07:27:18.995144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.794 [2024-11-20 07:27:18.999965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.794 [2024-11-20 07:27:19.000168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.794 [2024-11-20 07:27:19.000197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.794 [2024-11-20 07:27:19.005168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.794 [2024-11-20 07:27:19.005402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.794 [2024-11-20 07:27:19.005431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.794 [2024-11-20 07:27:19.010491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.794 [2024-11-20 07:27:19.010718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.794 [2024-11-20 07:27:19.010747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.794 [2024-11-20 07:27:19.015639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.794 [2024-11-20 07:27:19.015955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.794 [2024-11-20 07:27:19.015984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.794 [2024-11-20 07:27:19.020780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.794 [2024-11-20 07:27:19.021011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.794 [2024-11-20 07:27:19.021039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.794 [2024-11-20 07:27:19.025852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.794 [2024-11-20 07:27:19.026093] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.794 [2024-11-20 07:27:19.026122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.794 [2024-11-20 07:27:19.030922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.794 [2024-11-20 07:27:19.031170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.794 [2024-11-20 07:27:19.031200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.794 [2024-11-20 07:27:19.036033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.794 [2024-11-20 07:27:19.036162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.794 [2024-11-20 07:27:19.036191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.794 [2024-11-20 07:27:19.041190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.794 [2024-11-20 07:27:19.041481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.795 [2024-11-20 07:27:19.041510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.795 [2024-11-20 07:27:19.046415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.795 [2024-11-20 07:27:19.046667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.795 [2024-11-20 07:27:19.046696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.795 [2024-11-20 07:27:19.051531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.795 [2024-11-20 07:27:19.051772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.795 [2024-11-20 07:27:19.051800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.795 [2024-11-20 07:27:19.056716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.795 [2024-11-20 07:27:19.056950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.795 [2024-11-20 07:27:19.056978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.795 [2024-11-20 07:27:19.061888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.795 [2024-11-20 07:27:19.062161] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.795 [2024-11-20 07:27:19.062196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.795 [2024-11-20 07:27:19.067010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.795 [2024-11-20 07:27:19.067241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.795 [2024-11-20 07:27:19.067269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.795 [2024-11-20 07:27:19.072230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.795 [2024-11-20 07:27:19.072471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.795 [2024-11-20 07:27:19.072499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.795 [2024-11-20 07:27:19.077529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.795 [2024-11-20 07:27:19.077802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.795 [2024-11-20 07:27:19.077831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.795 [2024-11-20 07:27:19.082608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.795 [2024-11-20 07:27:19.082866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.795 [2024-11-20 07:27:19.082899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.795 [2024-11-20 07:27:19.087609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.795 [2024-11-20 07:27:19.087816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.795 [2024-11-20 07:27:19.087844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.795 [2024-11-20 07:27:19.092912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.795 [2024-11-20 07:27:19.093155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.795 [2024-11-20 07:27:19.093183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.795 [2024-11-20 07:27:19.098001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.795 [2024-11-20 
07:27:19.098230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.795 [2024-11-20 07:27:19.098258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.795 [2024-11-20 07:27:19.103247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.795 [2024-11-20 07:27:19.103460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.795 [2024-11-20 07:27:19.103489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.795 [2024-11-20 07:27:19.108310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.795 [2024-11-20 07:27:19.108520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.795 [2024-11-20 07:27:19.108549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.795 [2024-11-20 07:27:19.113412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.795 [2024-11-20 07:27:19.113633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.795 [2024-11-20 07:27:19.113662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.795 [2024-11-20 07:27:19.118708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.795 [2024-11-20 07:27:19.118928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.795 [2024-11-20 07:27:19.118956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.795 [2024-11-20 07:27:19.123896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.795 [2024-11-20 07:27:19.124161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.795 [2024-11-20 07:27:19.124190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.795 [2024-11-20 07:27:19.129008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.795 [2024-11-20 07:27:19.129211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.795 [2024-11-20 07:27:19.129239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.795 [2024-11-20 07:27:19.134200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with 
pdu=0x200016eff3c8 00:25:15.795 [2024-11-20 07:27:19.134447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.795 [2024-11-20 07:27:19.134476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.795 [2024-11-20 07:27:19.139276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.795 [2024-11-20 07:27:19.139533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.795 [2024-11-20 07:27:19.139562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.795 [2024-11-20 07:27:19.144343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.795 [2024-11-20 07:27:19.144610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.795 [2024-11-20 07:27:19.144638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.795 [2024-11-20 07:27:19.149529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.795 [2024-11-20 07:27:19.149801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.795 [2024-11-20 07:27:19.149830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.795 [2024-11-20 07:27:19.154664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.795 [2024-11-20 07:27:19.154871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.795 [2024-11-20 07:27:19.154899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.795 [2024-11-20 07:27:19.159899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.795 [2024-11-20 07:27:19.160073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.795 [2024-11-20 07:27:19.160103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.795 [2024-11-20 07:27:19.165070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.795 [2024-11-20 07:27:19.165318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.795 [2024-11-20 07:27:19.165346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.795 [2024-11-20 07:27:19.170279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.795 [2024-11-20 07:27:19.170498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.795 [2024-11-20 07:27:19.170527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.795 [2024-11-20 07:27:19.175589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.795 [2024-11-20 07:27:19.175848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.795 [2024-11-20 07:27:19.175878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.795 [2024-11-20 07:27:19.180702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.795 [2024-11-20 07:27:19.180926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.795 [2024-11-20 07:27:19.180955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.796 [2024-11-20 07:27:19.185886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.796 [2024-11-20 07:27:19.186140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.796 [2024-11-20 07:27:19.186169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.796 [2024-11-20 07:27:19.190991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.796 [2024-11-20 07:27:19.191216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.796 [2024-11-20 07:27:19.191245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.796 [2024-11-20 07:27:19.196164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.796 [2024-11-20 07:27:19.196439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.796 [2024-11-20 07:27:19.196469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.796 [2024-11-20 07:27:19.201384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.796 [2024-11-20 07:27:19.201667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.796 [2024-11-20 07:27:19.201696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.796 [2024-11-20 07:27:19.206459] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.796 [2024-11-20 07:27:19.206733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.796 [2024-11-20 07:27:19.206762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.796 [2024-11-20 07:27:19.211541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.796 [2024-11-20 07:27:19.211798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.796 [2024-11-20 07:27:19.211827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.796 [2024-11-20 07:27:19.216633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.796 [2024-11-20 07:27:19.216886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.796 [2024-11-20 07:27:19.216920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.796 [2024-11-20 07:27:19.221831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:15.796 [2024-11-20 07:27:19.222082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.796 [2024-11-20 07:27:19.222110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.054 [2024-11-20 07:27:19.227116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.054 [2024-11-20 07:27:19.227354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.054 [2024-11-20 07:27:19.227384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.054 [2024-11-20 07:27:19.232228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.054 [2024-11-20 07:27:19.232506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.054 [2024-11-20 07:27:19.232535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.054 [2024-11-20 07:27:19.237413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.054 [2024-11-20 07:27:19.237628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.054 [2024-11-20 07:27:19.237657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.054 [2024-11-20 07:27:19.242572] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.054 [2024-11-20 07:27:19.242785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.054 [2024-11-20 07:27:19.242814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.054 [2024-11-20 07:27:19.247625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.054 [2024-11-20 07:27:19.247860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.054 [2024-11-20 07:27:19.247889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.054 [2024-11-20 07:27:19.252723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.054 [2024-11-20 07:27:19.252939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.054 [2024-11-20 07:27:19.252967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.054 [2024-11-20 07:27:19.257939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.054 [2024-11-20 07:27:19.258160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.054 [2024-11-20 07:27:19.258189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.054 [2024-11-20 07:27:19.263128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.054 [2024-11-20 07:27:19.263370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.054 [2024-11-20 07:27:19.263399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.054 [2024-11-20 07:27:19.268413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.054 [2024-11-20 07:27:19.268708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.055 [2024-11-20 07:27:19.268737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.055 [2024-11-20 07:27:19.273518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.055 [2024-11-20 07:27:19.273775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.055 [2024-11-20 07:27:19.273804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.055 
[2024-11-20 07:27:19.278601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.055 [2024-11-20 07:27:19.278858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.055 [2024-11-20 07:27:19.278887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.055 [2024-11-20 07:27:19.283818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.055 [2024-11-20 07:27:19.284069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.055 [2024-11-20 07:27:19.284098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.055 [2024-11-20 07:27:19.288911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.055 [2024-11-20 07:27:19.289136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.055 [2024-11-20 07:27:19.289165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.055 [2024-11-20 07:27:19.294054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.055 [2024-11-20 07:27:19.294366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.055 [2024-11-20 07:27:19.294395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.055 [2024-11-20 07:27:19.299072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.055 [2024-11-20 07:27:19.299293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.055 [2024-11-20 07:27:19.299330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.055 [2024-11-20 07:27:19.304268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.055 [2024-11-20 07:27:19.304560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.055 [2024-11-20 07:27:19.304589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.055 [2024-11-20 07:27:19.309466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.055 [2024-11-20 07:27:19.309688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.055 [2024-11-20 07:27:19.309717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:25:16.055 [2024-11-20 07:27:19.314529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.055 [2024-11-20 07:27:19.314780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.055 [2024-11-20 07:27:19.314809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.055 [2024-11-20 07:27:19.319588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.055 [2024-11-20 07:27:19.319842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.055 [2024-11-20 07:27:19.319871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.055 [2024-11-20 07:27:19.324675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.055 [2024-11-20 07:27:19.324935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.055 [2024-11-20 07:27:19.324963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.055 [2024-11-20 07:27:19.329994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.055 [2024-11-20 07:27:19.330267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.055 [2024-11-20 07:27:19.330294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.055 [2024-11-20 07:27:19.335040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.055 [2024-11-20 07:27:19.335211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.055 [2024-11-20 07:27:19.335240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.055 [2024-11-20 07:27:19.340188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.055 [2024-11-20 07:27:19.340431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.055 [2024-11-20 07:27:19.340460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.055 [2024-11-20 07:27:19.345372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.055 [2024-11-20 07:27:19.345637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.055 [2024-11-20 07:27:19.345665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.055 [2024-11-20 07:27:19.350436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.055 [2024-11-20 07:27:19.350715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.055 [2024-11-20 07:27:19.350751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.055 [2024-11-20 07:27:19.355534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.055 [2024-11-20 07:27:19.355794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.055 [2024-11-20 07:27:19.355822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.055 [2024-11-20 07:27:19.360686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.055 [2024-11-20 07:27:19.360859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.055 [2024-11-20 07:27:19.360886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.055 [2024-11-20 07:27:19.365776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.055 [2024-11-20 07:27:19.365955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.055 [2024-11-20 07:27:19.365983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.055 [2024-11-20 07:27:19.370983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.055 [2024-11-20 07:27:19.371147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.055 [2024-11-20 07:27:19.371174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.055 [2024-11-20 07:27:19.376141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.055 [2024-11-20 07:27:19.376373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.055 [2024-11-20 07:27:19.376401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.055 6180.00 IOPS, 772.50 MiB/s [2024-11-20T06:27:19.488Z] [2024-11-20 07:27:19.382176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.055 [2024-11-20 07:27:19.382391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.055 [2024-11-20 07:27:19.382420] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.055 [2024-11-20 07:27:19.386368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.055 [2024-11-20 07:27:19.386593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.055 [2024-11-20 07:27:19.386620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.055 [2024-11-20 07:27:19.390568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.055 [2024-11-20 07:27:19.390756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.055 [2024-11-20 07:27:19.390784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.055 [2024-11-20 07:27:19.394697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.055 [2024-11-20 07:27:19.394905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.055 [2024-11-20 07:27:19.394933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.055 [2024-11-20 07:27:19.398850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.055 [2024-11-20 07:27:19.399040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.055 [2024-11-20 07:27:19.399068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.055 [2024-11-20 07:27:19.403010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.055 [2024-11-20 07:27:19.403208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.056 [2024-11-20 07:27:19.403236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.056 [2024-11-20 07:27:19.407602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.056 [2024-11-20 07:27:19.407857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.056 [2024-11-20 07:27:19.407886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.056 [2024-11-20 07:27:19.412840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.056 [2024-11-20 07:27:19.413190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.056 [2024-11-20 
07:27:19.413219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.056 [2024-11-20 07:27:19.418446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.056 [2024-11-20 07:27:19.418771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.056 [2024-11-20 07:27:19.418800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.056 [2024-11-20 07:27:19.424206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.056 [2024-11-20 07:27:19.424508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.056 [2024-11-20 07:27:19.424536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.056 [2024-11-20 07:27:19.429499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.056 [2024-11-20 07:27:19.429814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.056 [2024-11-20 07:27:19.429843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.056 [2024-11-20 07:27:19.434833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.056 [2024-11-20 07:27:19.435174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.056 [2024-11-20 07:27:19.435202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.056 [2024-11-20 07:27:19.440203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.056 [2024-11-20 07:27:19.440489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.056 [2024-11-20 07:27:19.440518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.056 [2024-11-20 07:27:19.445537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.056 [2024-11-20 07:27:19.445791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.056 [2024-11-20 07:27:19.445819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.056 [2024-11-20 07:27:19.450597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.056 [2024-11-20 07:27:19.450872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:16.056 [2024-11-20 07:27:19.450901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.056 [2024-11-20 07:27:19.455691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.056 [2024-11-20 07:27:19.455952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.056 [2024-11-20 07:27:19.455980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.056 [2024-11-20 07:27:19.460735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.056 [2024-11-20 07:27:19.460919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.056 [2024-11-20 07:27:19.460947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.056 [2024-11-20 07:27:19.465987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.056 [2024-11-20 07:27:19.466232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.056 [2024-11-20 07:27:19.466261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.056 [2024-11-20 07:27:19.471114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.056 [2024-11-20 07:27:19.471350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.056 [2024-11-20 07:27:19.471379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.056 [2024-11-20 07:27:19.476311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.056 [2024-11-20 07:27:19.476529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.056 [2024-11-20 07:27:19.476557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.056 [2024-11-20 07:27:19.481407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.056 [2024-11-20 07:27:19.481606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.056 [2024-11-20 07:27:19.481641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.316 [2024-11-20 07:27:19.486519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.316 [2024-11-20 07:27:19.486740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.316 [2024-11-20 07:27:19.486769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.316 [2024-11-20 07:27:19.491654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.316 [2024-11-20 07:27:19.491895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.316 [2024-11-20 07:27:19.491924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.316 [2024-11-20 07:27:19.496734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.316 [2024-11-20 07:27:19.496980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.316 [2024-11-20 07:27:19.497009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.316 [2024-11-20 07:27:19.501956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.316 [2024-11-20 07:27:19.502178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.316 [2024-11-20 07:27:19.502207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.316 [2024-11-20 07:27:19.506959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.316 [2024-11-20 07:27:19.507126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.316 [2024-11-20 07:27:19.507154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.316 [2024-11-20 07:27:19.512073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.316 [2024-11-20 07:27:19.512358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.316 [2024-11-20 07:27:19.512387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.316 [2024-11-20 07:27:19.517249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.316 [2024-11-20 07:27:19.517431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.316 [2024-11-20 07:27:19.517460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.316 [2024-11-20 07:27:19.522319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.316 [2024-11-20 07:27:19.522551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.316 [2024-11-20 07:27:19.522580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.316 [2024-11-20 07:27:19.527458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.316 [2024-11-20 07:27:19.527715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.316 [2024-11-20 07:27:19.527743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.316 [2024-11-20 07:27:19.532557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.316 [2024-11-20 07:27:19.532774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.316 [2024-11-20 07:27:19.532802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.316 [2024-11-20 07:27:19.537700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.316 [2024-11-20 07:27:19.537923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.316 [2024-11-20 07:27:19.537952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.316 [2024-11-20 07:27:19.542915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.316 [2024-11-20 07:27:19.543227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.316 [2024-11-20 07:27:19.543256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.316 [2024-11-20 07:27:19.548067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.316 [2024-11-20 07:27:19.548295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.316 [2024-11-20 07:27:19.548331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.316 [2024-11-20 07:27:19.553102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.316 [2024-11-20 07:27:19.553388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.316 [2024-11-20 07:27:19.553417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.316 [2024-11-20 07:27:19.558204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.316 [2024-11-20 07:27:19.558461] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.316 [2024-11-20 07:27:19.558491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.316 [2024-11-20 07:27:19.563385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.316 [2024-11-20 07:27:19.563667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.316 [2024-11-20 07:27:19.563696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.316 [2024-11-20 07:27:19.568464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.316 [2024-11-20 07:27:19.568750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.316 [2024-11-20 07:27:19.568780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.316 [2024-11-20 07:27:19.573529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.316 [2024-11-20 07:27:19.573825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.316 [2024-11-20 07:27:19.573855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.316 [2024-11-20 07:27:19.578671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.316 [2024-11-20 07:27:19.578910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.316 [2024-11-20 07:27:19.578939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.316 [2024-11-20 07:27:19.583710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.317 [2024-11-20 07:27:19.583995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.317 [2024-11-20 07:27:19.584025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.317 [2024-11-20 07:27:19.588802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.317 [2024-11-20 07:27:19.589069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.317 [2024-11-20 07:27:19.589098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.317 [2024-11-20 07:27:19.593972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.317 [2024-11-20 
07:27:19.594279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.317 [2024-11-20 07:27:19.594316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.317 [2024-11-20 07:27:19.598964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.317 [2024-11-20 07:27:19.599231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.317 [2024-11-20 07:27:19.599260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.317 [2024-11-20 07:27:19.604059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.317 [2024-11-20 07:27:19.604317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.317 [2024-11-20 07:27:19.604352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.317 [2024-11-20 07:27:19.609120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.317 [2024-11-20 07:27:19.609398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.317 [2024-11-20 07:27:19.609429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.317 [2024-11-20 07:27:19.614369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.317 [2024-11-20 07:27:19.614593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.317 [2024-11-20 07:27:19.614629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.317 [2024-11-20 07:27:19.619551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.317 [2024-11-20 07:27:19.619805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.317 [2024-11-20 07:27:19.619834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.317 [2024-11-20 07:27:19.624646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.317 [2024-11-20 07:27:19.624882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.317 [2024-11-20 07:27:19.624911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.317 [2024-11-20 07:27:19.629770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with 
pdu=0x200016eff3c8 00:25:16.317 [2024-11-20 07:27:19.629972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.317 [2024-11-20 07:27:19.630001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.317 [2024-11-20 07:27:19.634904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.317 [2024-11-20 07:27:19.635174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.317 [2024-11-20 07:27:19.635203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.317 [2024-11-20 07:27:19.639759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.317 [2024-11-20 07:27:19.639926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.317 [2024-11-20 07:27:19.639954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.317 [2024-11-20 07:27:19.645159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.317 [2024-11-20 07:27:19.645460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.317 [2024-11-20 07:27:19.645489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.317 [2024-11-20 07:27:19.650272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.317 [2024-11-20 07:27:19.650546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.317 [2024-11-20 07:27:19.650576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.317 [2024-11-20 07:27:19.655533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.317 [2024-11-20 07:27:19.655749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.317 [2024-11-20 07:27:19.655778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.317 [2024-11-20 07:27:19.660565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.317 [2024-11-20 07:27:19.660828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.317 [2024-11-20 07:27:19.660856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.317 [2024-11-20 07:27:19.665637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.317 [2024-11-20 07:27:19.665916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.317 [2024-11-20 07:27:19.665945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.317 [2024-11-20 07:27:19.670746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.317 [2024-11-20 07:27:19.670986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.317 [2024-11-20 07:27:19.671014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.317 [2024-11-20 07:27:19.675797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.317 [2024-11-20 07:27:19.676070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.317 [2024-11-20 07:27:19.676099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.317 [2024-11-20 07:27:19.680996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.317 [2024-11-20 07:27:19.681279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.317 [2024-11-20 07:27:19.681316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.317 [2024-11-20 07:27:19.686195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.317 [2024-11-20 07:27:19.686488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.317 [2024-11-20 07:27:19.686517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.317 [2024-11-20 07:27:19.691270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.317 [2024-11-20 07:27:19.691567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.317 [2024-11-20 07:27:19.691596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.317 [2024-11-20 07:27:19.696344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.317 [2024-11-20 07:27:19.696645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.317 [2024-11-20 07:27:19.696675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.317 [2024-11-20 07:27:19.701486] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.317 [2024-11-20 07:27:19.701725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.317 [2024-11-20 07:27:19.701754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.317 [2024-11-20 07:27:19.706521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.317 [2024-11-20 07:27:19.706812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.317 [2024-11-20 07:27:19.706841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.317 [2024-11-20 07:27:19.711561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.317 [2024-11-20 07:27:19.711759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.317 [2024-11-20 07:27:19.711788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.317 [2024-11-20 07:27:19.716685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.317 [2024-11-20 07:27:19.716969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.318 [2024-11-20 07:27:19.716997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.318 [2024-11-20 07:27:19.721776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.318 [2024-11-20 07:27:19.722043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.318 [2024-11-20 07:27:19.722071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.318 [2024-11-20 07:27:19.726871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.318 [2024-11-20 07:27:19.727130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.318 [2024-11-20 07:27:19.727159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.318 [2024-11-20 07:27:19.731958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.318 [2024-11-20 07:27:19.732210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.318 [2024-11-20 07:27:19.732239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.318 [2024-11-20 07:27:19.737017] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.318 [2024-11-20 07:27:19.737285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.318 [2024-11-20 07:27:19.737322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.318 [2024-11-20 07:27:19.742387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.318 [2024-11-20 07:27:19.742623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.318 [2024-11-20 07:27:19.742653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.577 [2024-11-20 07:27:19.747571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.577 [2024-11-20 07:27:19.747827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.577 [2024-11-20 07:27:19.747862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.578 [2024-11-20 07:27:19.752539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.578 [2024-11-20 07:27:19.752784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.578 [2024-11-20 07:27:19.752813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.578 [2024-11-20 07:27:19.757588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.578 [2024-11-20 07:27:19.757868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.578 [2024-11-20 07:27:19.757897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.578 [2024-11-20 07:27:19.762697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.578 [2024-11-20 07:27:19.762940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.578 [2024-11-20 07:27:19.762968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.578 [2024-11-20 07:27:19.767735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.578 [2024-11-20 07:27:19.768016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.578 [2024-11-20 07:27:19.768044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.578 
[2024-11-20 07:27:19.772816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.578 [2024-11-20 07:27:19.773102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.578 [2024-11-20 07:27:19.773131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.578 [2024-11-20 07:27:19.777919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.578 [2024-11-20 07:27:19.778170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.578 [2024-11-20 07:27:19.778198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.578 [2024-11-20 07:27:19.782963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.578 [2024-11-20 07:27:19.783262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.578 [2024-11-20 07:27:19.783291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.578 [2024-11-20 07:27:19.788163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.578 [2024-11-20 07:27:19.788463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.578 [2024-11-20 07:27:19.788492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.578 [2024-11-20 07:27:19.793310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.578 [2024-11-20 07:27:19.793543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.578 [2024-11-20 07:27:19.793572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.578 [2024-11-20 07:27:19.798492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.578 [2024-11-20 07:27:19.798750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.578 [2024-11-20 07:27:19.798779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.578 [2024-11-20 07:27:19.803677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.578 [2024-11-20 07:27:19.803957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.578 [2024-11-20 07:27:19.803986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:25:16.578 [2024-11-20 07:27:19.808922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.578 [2024-11-20 07:27:19.809167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.578 [2024-11-20 07:27:19.809196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.578 [2024-11-20 07:27:19.813951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.578 [2024-11-20 07:27:19.814247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.578 [2024-11-20 07:27:19.814275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.578 [2024-11-20 07:27:19.818946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.578 [2024-11-20 07:27:19.819122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.578 [2024-11-20 07:27:19.819151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.578 [2024-11-20 07:27:19.824012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.578 [2024-11-20 07:27:19.824202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.578 [2024-11-20 07:27:19.824231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.578 [2024-11-20 07:27:19.829056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.578 [2024-11-20 07:27:19.829154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.578 [2024-11-20 07:27:19.829183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.578 [2024-11-20 07:27:19.834125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.578 [2024-11-20 07:27:19.834232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.578 [2024-11-20 07:27:19.834261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.578 [2024-11-20 07:27:19.839400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.578 [2024-11-20 07:27:19.839561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.578 [2024-11-20 07:27:19.839590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.578 [2024-11-20 07:27:19.844461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.578 [2024-11-20 07:27:19.844650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.578 [2024-11-20 07:27:19.844679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.578 [2024-11-20 07:27:19.849650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.578 [2024-11-20 07:27:19.849850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.578 [2024-11-20 07:27:19.849878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.578 [2024-11-20 07:27:19.854737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.578 [2024-11-20 07:27:19.854937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.578 [2024-11-20 07:27:19.854965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.578 [2024-11-20 07:27:19.859858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.578 [2024-11-20 07:27:19.860015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.578 [2024-11-20 07:27:19.860043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.578 [2024-11-20 07:27:19.864952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.578 [2024-11-20 07:27:19.865104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.578 [2024-11-20 07:27:19.865131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.578 [2024-11-20 07:27:19.870015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.578 [2024-11-20 07:27:19.870175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.578 [2024-11-20 07:27:19.870203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.578 [2024-11-20 07:27:19.875101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.578 [2024-11-20 07:27:19.875262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.578 [2024-11-20 07:27:19.875290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.578 [2024-11-20 07:27:19.880282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.578 [2024-11-20 07:27:19.880455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.578 [2024-11-20 07:27:19.880489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.578 [2024-11-20 07:27:19.885380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.578 [2024-11-20 07:27:19.885527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.579 [2024-11-20 07:27:19.885556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.579 [2024-11-20 07:27:19.890358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.579 [2024-11-20 07:27:19.890503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.579 [2024-11-20 07:27:19.890531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.579 [2024-11-20 07:27:19.895499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.579 [2024-11-20 07:27:19.895706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.579 [2024-11-20 07:27:19.895734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.579 [2024-11-20 07:27:19.900622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.579 [2024-11-20 07:27:19.900788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.579 [2024-11-20 07:27:19.900816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.579 [2024-11-20 07:27:19.905714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.579 [2024-11-20 07:27:19.905871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.579 [2024-11-20 07:27:19.905898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.579 [2024-11-20 07:27:19.910897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.579 [2024-11-20 07:27:19.911086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.579 [2024-11-20 07:27:19.911113] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.579 [2024-11-20 07:27:19.915971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.579 [2024-11-20 07:27:19.916142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.579 [2024-11-20 07:27:19.916170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.579 [2024-11-20 07:27:19.921049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.579 [2024-11-20 07:27:19.921235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.579 [2024-11-20 07:27:19.921262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.579 [2024-11-20 07:27:19.926128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.579 [2024-11-20 07:27:19.926316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.579 [2024-11-20 07:27:19.926344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.579 [2024-11-20 07:27:19.931208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.579 [2024-11-20 07:27:19.931400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.579 [2024-11-20 07:27:19.931428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.579 [2024-11-20 07:27:19.936316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.579 [2024-11-20 07:27:19.936474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.579 [2024-11-20 07:27:19.936502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.579 [2024-11-20 07:27:19.941406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.579 [2024-11-20 07:27:19.941564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.579 [2024-11-20 07:27:19.941591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.579 [2024-11-20 07:27:19.946479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.579 [2024-11-20 07:27:19.946639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.579 [2024-11-20 
07:27:19.946667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.579 [2024-11-20 07:27:19.951570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.579 [2024-11-20 07:27:19.951725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.579 [2024-11-20 07:27:19.951752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.579 [2024-11-20 07:27:19.956770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.579 [2024-11-20 07:27:19.956925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.579 [2024-11-20 07:27:19.956954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.579 [2024-11-20 07:27:19.961739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.579 [2024-11-20 07:27:19.961884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.579 [2024-11-20 07:27:19.961912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.579 [2024-11-20 07:27:19.966817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.579 [2024-11-20 07:27:19.966957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.579 [2024-11-20 07:27:19.966985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.579 [2024-11-20 07:27:19.971874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.579 [2024-11-20 07:27:19.972045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.579 [2024-11-20 07:27:19.972072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.579 [2024-11-20 07:27:19.977091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.579 [2024-11-20 07:27:19.977259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.579 [2024-11-20 07:27:19.977286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.579 [2024-11-20 07:27:19.982261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.579 [2024-11-20 07:27:19.982452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:16.579 [2024-11-20 07:27:19.982481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.579 [2024-11-20 07:27:19.987237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.579 [2024-11-20 07:27:19.987409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.579 [2024-11-20 07:27:19.987438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.579 [2024-11-20 07:27:19.992450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.579 [2024-11-20 07:27:19.992620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.579 [2024-11-20 07:27:19.992648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.579 [2024-11-20 07:27:19.997537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.579 [2024-11-20 07:27:19.997687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.579 [2024-11-20 07:27:19.997714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.579 [2024-11-20 07:27:20.003213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.579 [2024-11-20 07:27:20.003407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.579 [2024-11-20 07:27:20.003443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.839 [2024-11-20 07:27:20.008291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.839 [2024-11-20 07:27:20.008482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.839 [2024-11-20 07:27:20.008513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.839 [2024-11-20 07:27:20.013403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.839 [2024-11-20 07:27:20.013579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.839 [2024-11-20 07:27:20.013619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.839 [2024-11-20 07:27:20.018098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.839 [2024-11-20 07:27:20.018194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1920 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.839 [2024-11-20 07:27:20.018226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.839 [2024-11-20 07:27:20.022771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.839 [2024-11-20 07:27:20.022966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.839 [2024-11-20 07:27:20.022996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.839 [2024-11-20 07:27:20.027994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.839 [2024-11-20 07:27:20.028189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.839 [2024-11-20 07:27:20.028219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.839 [2024-11-20 07:27:20.033601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.839 [2024-11-20 07:27:20.033768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.839 [2024-11-20 07:27:20.033800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.839 [2024-11-20 07:27:20.038404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.839 [2024-11-20 07:27:20.038537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.839 [2024-11-20 07:27:20.038566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.839 [2024-11-20 07:27:20.042675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.839 [2024-11-20 07:27:20.042798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.839 [2024-11-20 07:27:20.042831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.839 [2024-11-20 07:27:20.047043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.839 [2024-11-20 07:27:20.047169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.839 [2024-11-20 07:27:20.047198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.839 [2024-11-20 07:27:20.051379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.839 [2024-11-20 07:27:20.051522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.839 [2024-11-20 07:27:20.051551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.839 [2024-11-20 07:27:20.055792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.839 [2024-11-20 07:27:20.055885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.839 [2024-11-20 07:27:20.055912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.839 [2024-11-20 07:27:20.060146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.839 [2024-11-20 07:27:20.060262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.839 [2024-11-20 07:27:20.060290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.839 [2024-11-20 07:27:20.065036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.839 [2024-11-20 07:27:20.065147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.839 [2024-11-20 07:27:20.065175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.839 [2024-11-20 07:27:20.070033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.839 [2024-11-20 07:27:20.070212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.839 [2024-11-20 07:27:20.070241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.839 [2024-11-20 07:27:20.075118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.839 [2024-11-20 07:27:20.075286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.839 [2024-11-20 07:27:20.075322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.839 [2024-11-20 07:27:20.081045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.839 [2024-11-20 07:27:20.081251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.839 [2024-11-20 07:27:20.081280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.839 [2024-11-20 07:27:20.086429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.839 [2024-11-20 07:27:20.086520] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.839 [2024-11-20 07:27:20.086548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.839 [2024-11-20 07:27:20.090752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.839 [2024-11-20 07:27:20.090840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.839 [2024-11-20 07:27:20.090866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.839 [2024-11-20 07:27:20.095048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.839 [2024-11-20 07:27:20.095205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.839 [2024-11-20 07:27:20.095234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.839 [2024-11-20 07:27:20.100296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.839 [2024-11-20 07:27:20.100378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.839 [2024-11-20 07:27:20.100405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.839 [2024-11-20 07:27:20.104784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.839 [2024-11-20 07:27:20.104873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.840 [2024-11-20 07:27:20.104900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.840 [2024-11-20 07:27:20.108974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.840 [2024-11-20 07:27:20.109079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.840 [2024-11-20 07:27:20.109107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.840 [2024-11-20 07:27:20.113183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.840 [2024-11-20 07:27:20.113266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.840 [2024-11-20 07:27:20.113292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.840 [2024-11-20 07:27:20.117343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.840 [2024-11-20 07:27:20.117422] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.840 [2024-11-20 07:27:20.117450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.840 [2024-11-20 07:27:20.121681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.840 [2024-11-20 07:27:20.121787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.840 [2024-11-20 07:27:20.121815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.840 [2024-11-20 07:27:20.126701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.840 [2024-11-20 07:27:20.126868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.840 [2024-11-20 07:27:20.126896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.840 [2024-11-20 07:27:20.131786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.840 [2024-11-20 07:27:20.131950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.840 [2024-11-20 07:27:20.131979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.840 [2024-11-20 07:27:20.137612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.840 [2024-11-20 07:27:20.137792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.840 [2024-11-20 07:27:20.137827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.840 [2024-11-20 07:27:20.142186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.840 [2024-11-20 07:27:20.142266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.840 [2024-11-20 07:27:20.142292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.840 [2024-11-20 07:27:20.146456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.840 [2024-11-20 07:27:20.146595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.840 [2024-11-20 07:27:20.146623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.840 [2024-11-20 07:27:20.150826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.840 [2024-11-20 
07:27:20.150952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.840 [2024-11-20 07:27:20.150980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.840 [2024-11-20 07:27:20.155314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.840 [2024-11-20 07:27:20.155468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.840 [2024-11-20 07:27:20.155496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.840 [2024-11-20 07:27:20.159612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.840 [2024-11-20 07:27:20.159765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.840 [2024-11-20 07:27:20.159793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.840 [2024-11-20 07:27:20.163884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.840 [2024-11-20 07:27:20.163974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.840 [2024-11-20 07:27:20.164006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.840 [2024-11-20 07:27:20.168282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.840 [2024-11-20 07:27:20.168390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.840 [2024-11-20 07:27:20.168419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.840 [2024-11-20 07:27:20.172644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.840 [2024-11-20 07:27:20.172769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.840 [2024-11-20 07:27:20.172797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.840 [2024-11-20 07:27:20.177203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.840 [2024-11-20 07:27:20.177354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.840 [2024-11-20 07:27:20.177383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.840 [2024-11-20 07:27:20.182243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with 
pdu=0x200016eff3c8 00:25:16.840 [2024-11-20 07:27:20.182438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.840 [2024-11-20 07:27:20.182467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.840 [2024-11-20 07:27:20.187862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.840 [2024-11-20 07:27:20.188033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.840 [2024-11-20 07:27:20.188061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.840 [2024-11-20 07:27:20.193143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.840 [2024-11-20 07:27:20.193241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.840 [2024-11-20 07:27:20.193269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.840 [2024-11-20 07:27:20.197378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.840 [2024-11-20 07:27:20.197450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.840 [2024-11-20 07:27:20.197481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.840 [2024-11-20 07:27:20.201706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.840 [2024-11-20 07:27:20.201900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.840 [2024-11-20 07:27:20.201928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.840 [2024-11-20 07:27:20.206160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.840 [2024-11-20 07:27:20.206277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.840 [2024-11-20 07:27:20.206312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.840 [2024-11-20 07:27:20.210629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.840 [2024-11-20 07:27:20.210762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.840 [2024-11-20 07:27:20.210791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.840 [2024-11-20 07:27:20.215168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.840 [2024-11-20 07:27:20.215319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.840 [2024-11-20 07:27:20.215348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.840 [2024-11-20 07:27:20.219419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.840 [2024-11-20 07:27:20.219560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.840 [2024-11-20 07:27:20.219589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.840 [2024-11-20 07:27:20.223680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.840 [2024-11-20 07:27:20.223817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.840 [2024-11-20 07:27:20.223845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.840 [2024-11-20 07:27:20.227888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.840 [2024-11-20 07:27:20.228028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.841 [2024-11-20 07:27:20.228056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.841 [2024-11-20 07:27:20.232291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.841 [2024-11-20 07:27:20.232399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.841 [2024-11-20 07:27:20.232427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.841 [2024-11-20 07:27:20.236634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.841 [2024-11-20 07:27:20.236749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.841 [2024-11-20 07:27:20.236778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.841 [2024-11-20 07:27:20.240985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.841 [2024-11-20 07:27:20.241088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.841 [2024-11-20 07:27:20.241116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.841 [2024-11-20 07:27:20.245455] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.841 [2024-11-20 07:27:20.245551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.841 [2024-11-20 07:27:20.245579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.841 [2024-11-20 07:27:20.249870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.841 [2024-11-20 07:27:20.249955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.841 [2024-11-20 07:27:20.249982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.841 [2024-11-20 07:27:20.254129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.841 [2024-11-20 07:27:20.254275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.841 [2024-11-20 07:27:20.254316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.841 [2024-11-20 07:27:20.258489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.841 [2024-11-20 07:27:20.258579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.841 [2024-11-20 07:27:20.258607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.841 [2024-11-20 07:27:20.262785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.841 [2024-11-20 07:27:20.262859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.841 [2024-11-20 07:27:20.262886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.841 [2024-11-20 07:27:20.267092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:16.841 [2024-11-20 07:27:20.267168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.841 [2024-11-20 07:27:20.267196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:17.099 [2024-11-20 07:27:20.271328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:17.099 [2024-11-20 07:27:20.271426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.099 [2024-11-20 07:27:20.271454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:17.099 [2024-11-20 07:27:20.276091] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:17.099 [2024-11-20 07:27:20.276267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.099 [2024-11-20 07:27:20.276295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:17.099 [2024-11-20 07:27:20.281062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:17.099 [2024-11-20 07:27:20.281217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.099 [2024-11-20 07:27:20.281245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:17.099 [2024-11-20 07:27:20.286124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:17.099 [2024-11-20 07:27:20.286311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.099 [2024-11-20 07:27:20.286339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:17.099 [2024-11-20 07:27:20.291192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:17.099 [2024-11-20 07:27:20.291371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.099 [2024-11-20 07:27:20.291400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:17.099 [2024-11-20 07:27:20.296317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:17.099 [2024-11-20 07:27:20.296461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.099 [2024-11-20 07:27:20.296489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:17.100 [2024-11-20 07:27:20.302348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:17.100 [2024-11-20 07:27:20.302553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.100 [2024-11-20 07:27:20.302582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:17.100 [2024-11-20 07:27:20.307409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:17.100 [2024-11-20 07:27:20.307530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.100 [2024-11-20 07:27:20.307559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:17.100 
[2024-11-20 07:27:20.311609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:17.100 [2024-11-20 07:27:20.311711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.100 [2024-11-20 07:27:20.311739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:17.100 [2024-11-20 07:27:20.315839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:17.100 [2024-11-20 07:27:20.315933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.100 [2024-11-20 07:27:20.315961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:17.100 [2024-11-20 07:27:20.321135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:17.100 [2024-11-20 07:27:20.321216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.100 [2024-11-20 07:27:20.321242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:17.100 [2024-11-20 07:27:20.325337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:17.100 [2024-11-20 07:27:20.325416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.100 [2024-11-20 07:27:20.325444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:17.100 [2024-11-20 07:27:20.329467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:17.100 [2024-11-20 07:27:20.329551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.100 [2024-11-20 07:27:20.329579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:17.100 [2024-11-20 07:27:20.333636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:17.100 [2024-11-20 07:27:20.333714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.100 [2024-11-20 07:27:20.333740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:17.100 [2024-11-20 07:27:20.337781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:17.100 [2024-11-20 07:27:20.337851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.100 [2024-11-20 07:27:20.337877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:25:17.100 [2024-11-20 07:27:20.341910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:17.100 [2024-11-20 07:27:20.342000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.100 [2024-11-20 07:27:20.342027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:17.100 [2024-11-20 07:27:20.346043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:17.100 [2024-11-20 07:27:20.346111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.100 [2024-11-20 07:27:20.346137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:17.100 [2024-11-20 07:27:20.350170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:17.100 [2024-11-20 07:27:20.350261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.100 [2024-11-20 07:27:20.350287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:17.100 [2024-11-20 07:27:20.354282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:17.100 [2024-11-20 07:27:20.354383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.100 [2024-11-20 07:27:20.354411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:17.100 [2024-11-20 07:27:20.358426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:17.100 [2024-11-20 07:27:20.358506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.100 [2024-11-20 07:27:20.358533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:17.100 [2024-11-20 07:27:20.362579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:17.100 [2024-11-20 07:27:20.362649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.100 [2024-11-20 07:27:20.362676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:17.100 [2024-11-20 07:27:20.366731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:17.100 [2024-11-20 07:27:20.366815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.100 [2024-11-20 07:27:20.366842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:17.100 [2024-11-20 07:27:20.370853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:17.100 [2024-11-20 07:27:20.370929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.100 [2024-11-20 07:27:20.370961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:17.100 [2024-11-20 07:27:20.374995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:17.100 [2024-11-20 07:27:20.375073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.100 [2024-11-20 07:27:20.375099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:17.100 [2024-11-20 07:27:20.379115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188e090) with pdu=0x200016eff3c8 00:25:17.100 [2024-11-20 07:27:20.379195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.100 [2024-11-20 07:27:20.379221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:17.100 6266.50 IOPS, 783.31 MiB/s 00:25:17.100 Latency(us) 00:25:17.100 [2024-11-20T06:27:20.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:17.100 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:17.100 nvme0n1 : 2.00 6265.00 783.12 0.00 0.00 2547.16 1953.94 10243.03 00:25:17.100 [2024-11-20T06:27:20.533Z] =================================================================================================================== 00:25:17.100 [2024-11-20T06:27:20.533Z] Total : 6265.00 783.12 0.00 0.00 2547.16 1953.94 10243.03 00:25:17.100 { 00:25:17.100 "results": [ 00:25:17.100 { 00:25:17.100 "job": "nvme0n1", 00:25:17.100 "core_mask": "0x2", 00:25:17.100 "workload": "randwrite", 00:25:17.100 "status": "finished", 00:25:17.100 "queue_depth": 16, 00:25:17.100 "io_size": 131072, 00:25:17.100 "runtime": 2.003033, 00:25:17.100 "iops": 6264.999128821143, 00:25:17.100 "mibps": 783.1248911026429, 00:25:17.100 "io_failed": 0, 00:25:17.100 "io_timeout": 0, 00:25:17.100 "avg_latency_us": 2547.155112374308, 00:25:17.100 "min_latency_us": 1953.9437037037037, 00:25:17.100 "max_latency_us": 10243.034074074074 00:25:17.100 } 00:25:17.100 ], 00:25:17.100 "core_count": 1 00:25:17.100 } 00:25:17.100 07:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:17.100 07:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:17.100 07:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:17.100 | .driver_specific 00:25:17.100 | .nvme_error 00:25:17.100 | .status_code 00:25:17.100 | .command_transient_transport_error' 00:25:17.100 07:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_get_iostat -b nvme0n1 00:25:17.358 07:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 405 > 0 )) 00:25:17.358 07:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2607883 00:25:17.358 07:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 2607883 ']' 00:25:17.358 07:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 2607883 00:25:17.358 07:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:25:17.358 07:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:17.358 07:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2607883 00:25:17.358 07:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:17.358 07:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:17.358 07:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2607883' 00:25:17.358 killing process with pid 2607883 00:25:17.358 07:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 2607883 00:25:17.358 Received shutdown signal, test time was about 2.000000 seconds 00:25:17.358 00:25:17.358 Latency(us) 00:25:17.358 [2024-11-20T06:27:20.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:17.358 [2024-11-20T06:27:20.791Z] =================================================================================================================== 00:25:17.358 [2024-11-20T06:27:20.791Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:17.358 07:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 2607883 00:25:17.616 07:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2606006 00:25:17.616 07:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 2606006 ']' 00:25:17.616 07:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 2606006 00:25:17.616 07:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:25:17.616 07:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:17.616 07:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2606006 00:25:17.616 07:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:17.616 07:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:17.616 07:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2606006' 00:25:17.616 killing process with pid 2606006 00:25:17.616 07:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 2606006 00:25:17.616 07:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 2606006 00:25:17.875 00:25:17.875 
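The (( 405 > 0 )) check traced above is how the digest-error test proves the injected CRC32C failures were actually seen by the host: it reads the bdev I/O statistics over bdevperf's RPC socket and extracts the transient-transport-error counter, which must be non-zero (405 in this run, matching the COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions printed earlier). A minimal stand-alone sketch of that query, reusing the socket path, bdev name, and jq filter shown in the trace:

# Sketch only: count the NVMe transient transport errors recorded for nvme0n1.
errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errcount > 0 )) && echo "data digest errors observed: $errcount"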
real 0m15.455s 00:25:17.875 user 0m30.628s 00:25:17.875 sys 0m4.497s 00:25:17.875 07:27:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:17.875 07:27:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:17.875 ************************************ 00:25:17.875 END TEST nvmf_digest_error 00:25:17.875 ************************************ 00:25:17.875 07:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:25:17.875 07:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:25:17.875 07:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:17.875 07:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:25:17.875 07:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:17.875 07:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:25:17.875 07:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:17.875 07:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:17.875 rmmod nvme_tcp 00:25:17.875 rmmod nvme_fabrics 00:25:17.875 rmmod nvme_keyring 00:25:17.875 07:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:17.875 07:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:25:17.875 07:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:25:17.875 07:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2606006 ']' 00:25:17.875 07:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2606006 00:25:17.875 07:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 2606006 ']' 00:25:17.875 07:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 2606006 00:25:17.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2606006) - No such process 00:25:17.875 07:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 2606006 is not found' 00:25:17.875 Process with pid 2606006 is not found 00:25:17.875 07:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:17.875 07:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:17.875 07:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:17.875 07:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:25:17.875 07:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:25:17.875 07:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:17.875 07:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:25:18.134 07:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:18.134 07:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:18.134 07:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.134 07:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:18.134 07:27:21 nvmf_tcp.nvmf_host.nvmf_digest -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.039 07:27:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:20.039 00:25:20.039 real 0m35.995s 00:25:20.039 user 1m3.555s 00:25:20.039 sys 0m10.494s 00:25:20.039 07:27:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:20.039 07:27:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:20.039 ************************************ 00:25:20.039 END TEST nvmf_digest 00:25:20.039 ************************************ 00:25:20.039 07:27:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:25:20.039 07:27:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:25:20.039 07:27:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:25:20.039 07:27:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:20.039 07:27:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:20.039 07:27:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:20.039 07:27:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.039 ************************************ 00:25:20.039 START TEST nvmf_bdevperf 00:25:20.039 ************************************ 00:25:20.039 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:20.039 * Looking for test storage... 00:25:20.039 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:20.039 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:20.039 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:25:20.039 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:20.299 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:20.299 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:20.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.300 --rc genhtml_branch_coverage=1 00:25:20.300 --rc genhtml_function_coverage=1 00:25:20.300 --rc genhtml_legend=1 00:25:20.300 --rc geninfo_all_blocks=1 00:25:20.300 --rc geninfo_unexecuted_blocks=1 00:25:20.300 00:25:20.300 ' 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:20.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.300 --rc genhtml_branch_coverage=1 00:25:20.300 --rc genhtml_function_coverage=1 00:25:20.300 --rc genhtml_legend=1 00:25:20.300 --rc geninfo_all_blocks=1 00:25:20.300 --rc geninfo_unexecuted_blocks=1 00:25:20.300 00:25:20.300 ' 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:20.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.300 --rc genhtml_branch_coverage=1 00:25:20.300 --rc genhtml_function_coverage=1 00:25:20.300 --rc genhtml_legend=1 00:25:20.300 --rc geninfo_all_blocks=1 00:25:20.300 --rc geninfo_unexecuted_blocks=1 00:25:20.300 00:25:20.300 ' 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:20.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.300 --rc genhtml_branch_coverage=1 00:25:20.300 --rc genhtml_function_coverage=1 00:25:20.300 --rc genhtml_legend=1 00:25:20.300 --rc geninfo_all_blocks=1 00:25:20.300 --rc geninfo_unexecuted_blocks=1 00:25:20.300 00:25:20.300 ' 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:25:20.300 07:27:23 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:20.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:20.300 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.301 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:20.301 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:20.301 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:20.301 07:27:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:22.864 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:22.864 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:22.864 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:22.864 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:22.864 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:22.864 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:22.864 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:22.864 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:25:22.864 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:22.864 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:22.865 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:22.865 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:22.865 Found net devices under 0000:09:00.0: cvl_0_0 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:22.865 Found net devices under 0000:09:00.1: cvl_0_1 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:22.865 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:22.866 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:22.866 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:22.866 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:25:22.866 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:22.866 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:22.866 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:22.866 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:22.866 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:22.866 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:22.866 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:22.866 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:22.866 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:22.866 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:22.866 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:22.866 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:22.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:22.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:25:22.866 00:25:22.866 --- 10.0.0.2 ping statistics --- 00:25:22.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:22.866 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:25:22.866 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:22.866 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:22.866 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:25:22.866 00:25:22.866 --- 10.0.0.1 ping statistics --- 00:25:22.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:22.866 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:25:22.866 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:22.866 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:25:22.866 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:22.866 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:22.866 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:22.866 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:22.866 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:22.866 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:22.866 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:22.866 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:25:22.866 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:22.866 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:22.866 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:22.866 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:22.866 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2610255 00:25:22.866 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:22.866 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2610255 00:25:22.866 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 2610255 ']' 00:25:22.866 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:22.866 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:22.866 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:22.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:22.866 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:22.866 07:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:22.866 [2024-11-20 07:27:25.885250] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
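For reference, the network preparation traced just above (done before nvmf_tgt was launched) comes down to giving the target one E810 port inside a private network namespace while the other port stays on the host for the initiator, then proving 10.0.0.1 and 10.0.0.2 can reach each other. A condensed sketch using the cvl_0_0/cvl_0_1 names detected earlier; this is a recap of the traced commands, not a substitute for the common.sh logic:

# Target port lives in its own namespace; initiator port stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port on the initiator interface, then check both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1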
00:25:22.866 [2024-11-20 07:27:25.885359] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:22.866 [2024-11-20 07:27:25.961980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:22.866 [2024-11-20 07:27:26.024961] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:22.866 [2024-11-20 07:27:26.025009] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:22.866 [2024-11-20 07:27:26.025022] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:22.866 [2024-11-20 07:27:26.025034] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:22.866 [2024-11-20 07:27:26.025044] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:22.866 [2024-11-20 07:27:26.026553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:22.866 [2024-11-20 07:27:26.030322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:22.866 [2024-11-20 07:27:26.030333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:22.866 07:27:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:22.866 07:27:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:25:22.866 07:27:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:22.866 07:27:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:22.866 07:27:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:22.866 07:27:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:22.866 07:27:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:22.866 07:27:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.866 07:27:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:22.866 [2024-11-20 07:27:26.192894] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:22.866 07:27:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.866 07:27:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:22.866 07:27:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.866 07:27:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:22.866 Malloc0 00:25:22.866 07:27:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.866 07:27:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:22.866 07:27:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.866 07:27:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:22.866 07:27:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
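With the target app up, bdevperf.sh configures it over /var/tmp/spdk.sock: the trace above created the TCP transport and a 64 MiB, 512-byte-block malloc bdev, then declared subsystem nqn.2016-06.io.spdk:cnode1; the namespace and listener are added in the trace that follows. The same bring-up expressed as plain rpc.py calls, a sketch of the traced sequence assuming it is run from the SPDK checkout with the default RPC socket:

# Transport, backing bdev, subsystem, namespace, listener - in that order.
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420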
00:25:22.866 07:27:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:22.866 07:27:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.866 07:27:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:22.866 07:27:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.866 07:27:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:22.866 07:27:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.866 07:27:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:22.866 [2024-11-20 07:27:26.258340] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:22.866 07:27:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.867 07:27:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:25:22.867 07:27:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:25:22.867 07:27:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:25:22.867 07:27:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:25:22.867 07:27:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:22.867 07:27:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:22.867 { 00:25:22.867 "params": { 00:25:22.867 "name": "Nvme$subsystem", 00:25:22.867 "trtype": "$TEST_TRANSPORT", 00:25:22.867 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:22.867 "adrfam": "ipv4", 00:25:22.867 "trsvcid": "$NVMF_PORT", 00:25:22.867 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:22.867 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:22.867 "hdgst": ${hdgst:-false}, 00:25:22.867 "ddgst": ${ddgst:-false} 00:25:22.867 }, 00:25:22.867 "method": "bdev_nvme_attach_controller" 00:25:22.867 } 00:25:22.867 EOF 00:25:22.867 )") 00:25:22.867 07:27:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:25:22.867 07:27:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:25:22.867 07:27:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:25:22.867 07:27:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:22.867 "params": { 00:25:22.867 "name": "Nvme1", 00:25:22.867 "trtype": "tcp", 00:25:22.867 "traddr": "10.0.0.2", 00:25:22.867 "adrfam": "ipv4", 00:25:22.867 "trsvcid": "4420", 00:25:22.867 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:22.867 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:22.867 "hdgst": false, 00:25:22.867 "ddgst": false 00:25:22.867 }, 00:25:22.867 "method": "bdev_nvme_attach_controller" 00:25:22.867 }' 00:25:23.125 [2024-11-20 07:27:26.311343] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
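bdevperf itself never talks to the target's RPC socket; gen_nvmf_target_json hands it a bdev configuration on a file descriptor (--json /dev/fd/62) built from the fragment printed above. A hypothetical stand-alone equivalent written to a regular file, assuming the generic SPDK subsystems/config JSON layout (the exact wrapper gen_nvmf_target_json emits is not shown in this log, and /tmp/nvme1.json is an illustrative path):

cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same workload as the one-second verify run traced above: queue depth 128, 4 KiB I/Os.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/nvme1.json -q 128 -o 4096 -w verify -t 1

Note that hdgst and ddgst are false in this config, so this bdevperf run attaches to the target with TCP header and data digests disabled.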
00:25:23.125 [2024-11-20 07:27:26.311430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2610390 ] 00:25:23.125 [2024-11-20 07:27:26.379372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:23.125 [2024-11-20 07:27:26.441761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.383 Running I/O for 1 seconds... 00:25:24.316 8425.00 IOPS, 32.91 MiB/s 00:25:24.316 Latency(us) 00:25:24.316 [2024-11-20T06:27:27.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:24.316 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:24.316 Verification LBA range: start 0x0 length 0x4000 00:25:24.316 Nvme1n1 : 1.01 8501.12 33.21 0.00 0.00 14978.68 2621.44 12621.75 00:25:24.316 [2024-11-20T06:27:27.749Z] =================================================================================================================== 00:25:24.316 [2024-11-20T06:27:27.749Z] Total : 8501.12 33.21 0.00 0.00 14978.68 2621.44 12621.75 00:25:24.574 07:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2610539 00:25:24.574 07:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:25:24.574 07:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:25:24.574 07:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:25:24.574 07:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:25:24.574 07:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:25:24.574 07:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:24.574 07:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:24.574 { 00:25:24.574 "params": { 00:25:24.574 "name": "Nvme$subsystem", 00:25:24.574 "trtype": "$TEST_TRANSPORT", 00:25:24.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:24.574 "adrfam": "ipv4", 00:25:24.574 "trsvcid": "$NVMF_PORT", 00:25:24.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:24.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:24.574 "hdgst": ${hdgst:-false}, 00:25:24.574 "ddgst": ${ddgst:-false} 00:25:24.574 }, 00:25:24.574 "method": "bdev_nvme_attach_controller" 00:25:24.574 } 00:25:24.574 EOF 00:25:24.574 )") 00:25:24.574 07:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:25:24.574 07:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
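The bdevperf runs above are driven by the JSON config that gen_nvmf_target_json pipes in over /dev/fd, and the only controller entry in that config is the bdev_nvme_attach_controller call printed in full earlier (name Nvme1, tcp, 10.0.0.2:4420, cnode1). A standalone sketch of the same run with the config written to a regular file; the outer "subsystems"/"bdev"/"config" wrapper is the standard SPDK JSON-config shape and is an assumption here, since only the attach_controller entry appears verbatim in the log:

cat > /tmp/bdevperf_nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Same workload parameters as the first run in the log: queue depth 128,
# 4 KiB IOs, verify workload, 1 second duration.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf_nvme1.json -q 128 -o 4096 -w verify -t 1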
00:25:24.574 07:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:25:24.574 07:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:24.574 "params": { 00:25:24.574 "name": "Nvme1", 00:25:24.574 "trtype": "tcp", 00:25:24.574 "traddr": "10.0.0.2", 00:25:24.574 "adrfam": "ipv4", 00:25:24.574 "trsvcid": "4420", 00:25:24.574 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:24.574 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:24.574 "hdgst": false, 00:25:24.574 "ddgst": false 00:25:24.574 }, 00:25:24.574 "method": "bdev_nvme_attach_controller" 00:25:24.574 }' 00:25:24.574 [2024-11-20 07:27:27.901714] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:25:24.574 [2024-11-20 07:27:27.901798] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2610539 ] 00:25:24.574 [2024-11-20 07:27:27.969406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.833 [2024-11-20 07:27:28.028254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.833 Running I/O for 15 seconds... 00:25:27.140 8622.00 IOPS, 33.68 MiB/s [2024-11-20T06:27:31.144Z] 8508.00 IOPS, 33.23 MiB/s [2024-11-20T06:27:31.144Z] 07:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2610255 00:25:27.711 07:27:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:25:27.711 [2024-11-20 07:27:30.871885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:46296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-11-20 07:27:30.871948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-11-20 07:27:30.871993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:46304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-11-20 07:27:30.872012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-11-20 07:27:30.872030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:46312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-11-20 07:27:30.872047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-11-20 07:27:30.872064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:46320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-11-20 07:27:30.872079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-11-20 07:27:30.872097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:46328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-11-20 07:27:30.872128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-11-20 07:27:30.872146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:46336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-11-20 
07:27:30.872163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-11-20 07:27:30.872178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:46344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-11-20 07:27:30.872192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-11-20 07:27:30.872208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-11-20 07:27:30.872223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-11-20 07:27:30.872241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:46360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-11-20 07:27:30.872255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-11-20 07:27:30.872269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:46368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-11-20 07:27:30.872297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-11-20 07:27:30.872326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:46376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-11-20 07:27:30.872343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-11-20 07:27:30.872359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:46384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-11-20 07:27:30.872384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-11-20 07:27:30.872400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:46392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-11-20 07:27:30.872415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-11-20 07:27:30.872433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:46400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.711 [2024-11-20 07:27:30.872450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.711 [2024-11-20 07:27:30.872467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:46408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.712 [2024-11-20 07:27:30.872483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.712 [2024-11-20 07:27:30.872501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:46416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.712 [2024-11-20 07:27:30.872518] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.712 [2024-11-20 07:27:30.872536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:47000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.712 [2024-11-20 07:27:30.872554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.712 [2024-11-20 07:27:30.872574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:47008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.712 [2024-11-20 07:27:30.872605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.712 [2024-11-20 07:27:30.872620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:47016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.712 [2024-11-20 07:27:30.872634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.712 [2024-11-20 07:27:30.872647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:47024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.712 [2024-11-20 07:27:30.872660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.712 [2024-11-20 07:27:30.872673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:47032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.712 [2024-11-20 07:27:30.872685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.712 [2024-11-20 07:27:30.872699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:47040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.712 [2024-11-20 07:27:30.872711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.712 [2024-11-20 07:27:30.872724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:47048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.712 [2024-11-20 07:27:30.872737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.712 [2024-11-20 07:27:30.872750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:47056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.712 [2024-11-20 07:27:30.872763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.712 [2024-11-20 07:27:30.872777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:47064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.712 [2024-11-20 07:27:30.872795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.712 [2024-11-20 07:27:30.872809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:47072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.712 [2024-11-20 07:27:30.872822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.712 [2024-11-20 07:27:30.872835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:47080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.712 [2024-11-20 07:27:30.872847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.712 [2024-11-20 07:27:30.872861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:47088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.712 [2024-11-20 07:27:30.872882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.712 [2024-11-20 07:27:30.872895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:47096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.712 [2024-11-20 07:27:30.872908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.712 [2024-11-20 07:27:30.872921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:47104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.712 [2024-11-20 07:27:30.872933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.712 [2024-11-20 07:27:30.872947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.712 [2024-11-20 07:27:30.872959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.712 [2024-11-20 07:27:30.872972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:47120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.712 [2024-11-20 07:27:30.872985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.712 [2024-11-20 07:27:30.872999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:47128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.712 [2024-11-20 07:27:30.873010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.712 [2024-11-20 07:27:30.873024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:47136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.712 [2024-11-20 07:27:30.873037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.712 [2024-11-20 07:27:30.873051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:47144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.712 [2024-11-20 07:27:30.873063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.712 [2024-11-20 07:27:30.873077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:47152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.712 [2024-11-20 07:27:30.873089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:27.712 [2024-11-20 07:27:30.873102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:47160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.712 [2024-11-20 07:27:30.873114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.712 [2024-11-20 07:27:30.873131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:47168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.712 [2024-11-20 07:27:30.873144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.712 [2024-11-20 07:27:30.873158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:47176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.712 [2024-11-20 07:27:30.873170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.712 [2024-11-20 07:27:30.873183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:47184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.712 [2024-11-20 07:27:30.873196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.712 [2024-11-20 07:27:30.873209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:47192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.712 [2024-11-20 07:27:30.873221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.712 [2024-11-20 07:27:30.873234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:47200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.712 [2024-11-20 07:27:30.873246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.712 [2024-11-20 07:27:30.873260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:47208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.712 [2024-11-20 07:27:30.873273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.712 [2024-11-20 07:27:30.873309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:47216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.712 [2024-11-20 07:27:30.873325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.712 [2024-11-20 07:27:30.873351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:47224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.712 [2024-11-20 07:27:30.873365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.712 [2024-11-20 07:27:30.873380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:47232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.712 [2024-11-20 07:27:30.873393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.712 [2024-11-20 07:27:30.873408] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:47240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.712 [2024-11-20 07:27:30.873422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.712 [2024-11-20 07:27:30.873437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:47248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.712 [2024-11-20 07:27:30.873451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.712 [2024-11-20 07:27:30.873465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:47256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.712 [2024-11-20 07:27:30.873479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.712 [2024-11-20 07:27:30.873494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:47264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.712 [2024-11-20 07:27:30.873513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.712 [2024-11-20 07:27:30.873535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:47272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.712 [2024-11-20 07:27:30.873550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.712 [2024-11-20 07:27:30.873565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:47280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.713 [2024-11-20 07:27:30.873579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.713 [2024-11-20 07:27:30.873594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:47288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.713 [2024-11-20 07:27:30.873622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.713 [2024-11-20 07:27:30.873635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:47296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.713 [2024-11-20 07:27:30.873647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.713 [2024-11-20 07:27:30.873660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:47304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.713 [2024-11-20 07:27:30.873687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.713 [2024-11-20 07:27:30.873700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:47312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:27.713 [2024-11-20 07:27:30.873712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.713 [2024-11-20 07:27:30.873725] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:80 nsid:1 lba:46424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.713 [2024-11-20 07:27:30.873737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.713 [2024-11-20 07:27:30.873750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:46432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.713 [2024-11-20 07:27:30.873761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.713 [2024-11-20 07:27:30.873774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:46440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.713 [2024-11-20 07:27:30.873786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.713 [2024-11-20 07:27:30.873799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:46448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.713 [2024-11-20 07:27:30.873811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.713 [2024-11-20 07:27:30.873823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:46456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.713 [2024-11-20 07:27:30.873835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.713 [2024-11-20 07:27:30.873848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:46464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.713 [2024-11-20 07:27:30.873859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.713 [2024-11-20 07:27:30.873875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:46472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.713 [2024-11-20 07:27:30.873888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.713 [2024-11-20 07:27:30.873901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:46480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.713 [2024-11-20 07:27:30.873913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.713 [2024-11-20 07:27:30.873925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:46488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.713 [2024-11-20 07:27:30.873937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.713 [2024-11-20 07:27:30.873951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:46496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.713 [2024-11-20 07:27:30.873963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.713 [2024-11-20 07:27:30.873977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 
lba:46504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.713 [2024-11-20 07:27:30.873989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.713 [2024-11-20 07:27:30.874002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:46512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.713 [2024-11-20 07:27:30.874014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.713 [2024-11-20 07:27:30.874027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:46520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.713 [2024-11-20 07:27:30.874038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.713 [2024-11-20 07:27:30.874051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:46528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.713 [2024-11-20 07:27:30.874063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.713 [2024-11-20 07:27:30.874077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:46536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.713 [2024-11-20 07:27:30.874089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.713 [2024-11-20 07:27:30.874102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:46544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.713 [2024-11-20 07:27:30.874113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.713 [2024-11-20 07:27:30.874127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:46552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.713 [2024-11-20 07:27:30.874139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.713 [2024-11-20 07:27:30.874152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:46560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.713 [2024-11-20 07:27:30.874163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.713 [2024-11-20 07:27:30.874176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:46568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.713 [2024-11-20 07:27:30.874187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.713 [2024-11-20 07:27:30.874204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:46576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.713 [2024-11-20 07:27:30.874217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.713 [2024-11-20 07:27:30.874230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:46584 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:27.713 [2024-11-20 07:27:30.874241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.713 [2024-11-20 07:27:30.874254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:46592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.713 [2024-11-20 07:27:30.874266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.713 [2024-11-20 07:27:30.874279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:46600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.713 [2024-11-20 07:27:30.874317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.713 [2024-11-20 07:27:30.874334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:46608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.713 [2024-11-20 07:27:30.874347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.713 [2024-11-20 07:27:30.874362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:46616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.713 [2024-11-20 07:27:30.874376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.713 [2024-11-20 07:27:30.874393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:46624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.713 [2024-11-20 07:27:30.874406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.713 [2024-11-20 07:27:30.874422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:46632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.713 [2024-11-20 07:27:30.874436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.713 [2024-11-20 07:27:30.874451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:46640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.713 [2024-11-20 07:27:30.874464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.713 [2024-11-20 07:27:30.874479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:46648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.713 [2024-11-20 07:27:30.874494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.713 [2024-11-20 07:27:30.874509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:46656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.714 [2024-11-20 07:27:30.874523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.714 [2024-11-20 07:27:30.874539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:46664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.714 [2024-11-20 
07:27:30.874553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.714 [2024-11-20 07:27:30.874568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:46672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.714 [2024-11-20 07:27:30.874589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.714 [2024-11-20 07:27:30.874620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:46680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.714 [2024-11-20 07:27:30.874633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.714 [2024-11-20 07:27:30.874646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:46688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.714 [2024-11-20 07:27:30.874672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.714 [2024-11-20 07:27:30.874686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:46696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.714 [2024-11-20 07:27:30.874698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.714 [2024-11-20 07:27:30.874711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:46704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.714 [2024-11-20 07:27:30.874723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.714 [2024-11-20 07:27:30.874736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:46712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.714 [2024-11-20 07:27:30.874747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.714 [2024-11-20 07:27:30.874760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:46720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.714 [2024-11-20 07:27:30.874772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.714 [2024-11-20 07:27:30.874785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:46728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.714 [2024-11-20 07:27:30.874797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.714 [2024-11-20 07:27:30.874810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:46736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.714 [2024-11-20 07:27:30.874821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.714 [2024-11-20 07:27:30.874834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:46744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.714 [2024-11-20 07:27:30.874846] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.714 [2024-11-20 07:27:30.874860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:46752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.714 [2024-11-20 07:27:30.874872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.714 [2024-11-20 07:27:30.874885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:46760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.714 [2024-11-20 07:27:30.874897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.714 [2024-11-20 07:27:30.874911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:46768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.714 [2024-11-20 07:27:30.874924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.714 [2024-11-20 07:27:30.874942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:46776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.714 [2024-11-20 07:27:30.874955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.714 [2024-11-20 07:27:30.874968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:46784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.714 [2024-11-20 07:27:30.874980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.714 [2024-11-20 07:27:30.874994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:46792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.714 [2024-11-20 07:27:30.875006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.714 [2024-11-20 07:27:30.875019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:46800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.714 [2024-11-20 07:27:30.875031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.714 [2024-11-20 07:27:30.875045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:46808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.714 [2024-11-20 07:27:30.875057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.714 [2024-11-20 07:27:30.875070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:46816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.714 [2024-11-20 07:27:30.875083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.714 [2024-11-20 07:27:30.875096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:46824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.714 [2024-11-20 07:27:30.875108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.714 [2024-11-20 07:27:30.875122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:46832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.714 [2024-11-20 07:27:30.875134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.714 [2024-11-20 07:27:30.875147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:46840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.714 [2024-11-20 07:27:30.875159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.714 [2024-11-20 07:27:30.875172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:46848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.714 [2024-11-20 07:27:30.875184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.714 [2024-11-20 07:27:30.875197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:46856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.714 [2024-11-20 07:27:30.875209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.714 [2024-11-20 07:27:30.875222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:46864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.714 [2024-11-20 07:27:30.875233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.714 [2024-11-20 07:27:30.875246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:46872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.714 [2024-11-20 07:27:30.875262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.714 [2024-11-20 07:27:30.875276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:46880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.714 [2024-11-20 07:27:30.875310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.714 [2024-11-20 07:27:30.875329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:46888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.714 [2024-11-20 07:27:30.875349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.714 [2024-11-20 07:27:30.875365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:46896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.714 [2024-11-20 07:27:30.875378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.714 [2024-11-20 07:27:30.875399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:46904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.714 [2024-11-20 07:27:30.875414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.714 [2024-11-20 07:27:30.875429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:46912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.714 [2024-11-20 07:27:30.875443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.714 [2024-11-20 07:27:30.875458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:46920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.714 [2024-11-20 07:27:30.875472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.714 [2024-11-20 07:27:30.875487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:46928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.714 [2024-11-20 07:27:30.875500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.714 [2024-11-20 07:27:30.875515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:46936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.714 [2024-11-20 07:27:30.875529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-11-20 07:27:30.875544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:46944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-11-20 07:27:30.875568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-11-20 07:27:30.875583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:46952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-11-20 07:27:30.875596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-11-20 07:27:30.875630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:46960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-11-20 07:27:30.875643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-11-20 07:27:30.875658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:46968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-11-20 07:27:30.875685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-11-20 07:27:30.875703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:46976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-11-20 07:27:30.875715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-11-20 07:27:30.875728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:46984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.715 [2024-11-20 07:27:30.875740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 
[2024-11-20 07:27:30.875753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9628d0 is same with the state(6) to be set 00:25:27.715 [2024-11-20 07:27:30.875769] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.715 [2024-11-20 07:27:30.875778] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.715 [2024-11-20 07:27:30.875788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46992 len:8 PRP1 0x0 PRP2 0x0 00:25:27.715 [2024-11-20 07:27:30.875800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-11-20 07:27:30.875931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.715 [2024-11-20 07:27:30.875952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-11-20 07:27:30.875967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.715 [2024-11-20 07:27:30.875980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-11-20 07:27:30.876007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.715 [2024-11-20 07:27:30.876026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-11-20 07:27:30.876041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.715 [2024-11-20 07:27:30.876055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.715 [2024-11-20 07:27:30.876067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:27.715 [2024-11-20 07:27:30.879118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:27.715 [2024-11-20 07:27:30.879151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:27.715 [2024-11-20 07:27:30.879971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.715 [2024-11-20 07:27:30.880019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:27.715 [2024-11-20 07:27:30.880034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:27.715 [2024-11-20 07:27:30.880264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:27.715 [2024-11-20 07:27:30.880506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:27.715 [2024-11-20 07:27:30.880529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:27.715 
[2024-11-20 07:27:30.880545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:27.715 [2024-11-20 07:27:30.880561] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:27.715 [2024-11-20 07:27:30.892849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:27.715 [2024-11-20 07:27:30.893266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.715 [2024-11-20 07:27:30.893320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:27.715 [2024-11-20 07:27:30.893340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:27.715 [2024-11-20 07:27:30.893600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:27.715 [2024-11-20 07:27:30.893805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:27.715 [2024-11-20 07:27:30.893826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:27.715 [2024-11-20 07:27:30.893839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:27.715 [2024-11-20 07:27:30.893850] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:27.715 [2024-11-20 07:27:30.906076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:27.715 [2024-11-20 07:27:30.906498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.715 [2024-11-20 07:27:30.906529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:27.715 [2024-11-20 07:27:30.906545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:27.715 [2024-11-20 07:27:30.906784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:27.715 [2024-11-20 07:27:30.906975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:27.715 [2024-11-20 07:27:30.906995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:27.715 [2024-11-20 07:27:30.907007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:27.715 [2024-11-20 07:27:30.907019] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:27.715 [2024-11-20 07:27:30.919308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:27.715 [2024-11-20 07:27:30.919655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.715 [2024-11-20 07:27:30.919683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:27.715 [2024-11-20 07:27:30.919698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:27.715 [2024-11-20 07:27:30.919914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:27.715 [2024-11-20 07:27:30.920119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:27.715 [2024-11-20 07:27:30.920138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:27.715 [2024-11-20 07:27:30.920150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:27.715 [2024-11-20 07:27:30.920162] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:27.715 [2024-11-20 07:27:30.932283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:27.715 [2024-11-20 07:27:30.932612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.715 [2024-11-20 07:27:30.932640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:27.715 [2024-11-20 07:27:30.932662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:27.715 [2024-11-20 07:27:30.932880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:27.715 [2024-11-20 07:27:30.933084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:27.715 [2024-11-20 07:27:30.933104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:27.715 [2024-11-20 07:27:30.933116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:27.715 [2024-11-20 07:27:30.933128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:27.715 [2024-11-20 07:27:30.945540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:27.715 [2024-11-20 07:27:30.945934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.715 [2024-11-20 07:27:30.945963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:27.715 [2024-11-20 07:27:30.945978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:27.715 [2024-11-20 07:27:30.946195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:27.715 [2024-11-20 07:27:30.946431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:27.715 [2024-11-20 07:27:30.946452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:27.715 [2024-11-20 07:27:30.946465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:27.715 [2024-11-20 07:27:30.946477] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:27.715 [2024-11-20 07:27:30.958598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:27.716 [2024-11-20 07:27:30.958974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.716 [2024-11-20 07:27:30.959002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:27.716 [2024-11-20 07:27:30.959017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:27.716 [2024-11-20 07:27:30.959227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:27.716 [2024-11-20 07:27:30.959463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:27.716 [2024-11-20 07:27:30.959484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:27.716 [2024-11-20 07:27:30.959497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:27.716 [2024-11-20 07:27:30.959509] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:27.716 [2024-11-20 07:27:30.971744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:27.716 [2024-11-20 07:27:30.972149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.716 [2024-11-20 07:27:30.972177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:27.716 [2024-11-20 07:27:30.972193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:27.716 [2024-11-20 07:27:30.972453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:27.716 [2024-11-20 07:27:30.972678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:27.716 [2024-11-20 07:27:30.972699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:27.716 [2024-11-20 07:27:30.972712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:27.716 [2024-11-20 07:27:30.972725] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:27.716 [2024-11-20 07:27:30.984879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:27.716 [2024-11-20 07:27:30.985182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.716 [2024-11-20 07:27:30.985209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:27.716 [2024-11-20 07:27:30.985225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:27.716 [2024-11-20 07:27:30.985465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:27.716 [2024-11-20 07:27:30.985677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:27.716 [2024-11-20 07:27:30.985698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:27.716 [2024-11-20 07:27:30.985711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:27.716 [2024-11-20 07:27:30.985722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:27.716 [2024-11-20 07:27:30.997971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:27.716 [2024-11-20 07:27:30.998321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.716 [2024-11-20 07:27:30.998350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:27.716 [2024-11-20 07:27:30.998367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:27.716 [2024-11-20 07:27:30.998601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:27.716 [2024-11-20 07:27:30.998804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:27.716 [2024-11-20 07:27:30.998823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:27.716 [2024-11-20 07:27:30.998835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:27.716 [2024-11-20 07:27:30.998848] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:27.716 [2024-11-20 07:27:31.010996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:27.716 [2024-11-20 07:27:31.011382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.716 [2024-11-20 07:27:31.011411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:27.716 [2024-11-20 07:27:31.011427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:27.716 [2024-11-20 07:27:31.011643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:27.716 [2024-11-20 07:27:31.011847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:27.716 [2024-11-20 07:27:31.011865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:27.716 [2024-11-20 07:27:31.011883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:27.716 [2024-11-20 07:27:31.011895] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:27.716 [2024-11-20 07:27:31.024017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:27.716 [2024-11-20 07:27:31.024427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.716 [2024-11-20 07:27:31.024456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:27.716 [2024-11-20 07:27:31.024472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:27.716 [2024-11-20 07:27:31.024707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:27.716 [2024-11-20 07:27:31.024912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:27.716 [2024-11-20 07:27:31.024932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:27.716 [2024-11-20 07:27:31.024944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:27.716 [2024-11-20 07:27:31.024956] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:27.716 [2024-11-20 07:27:31.037162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:27.716 [2024-11-20 07:27:31.037583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.716 [2024-11-20 07:27:31.037611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:27.716 [2024-11-20 07:27:31.037626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:27.716 [2024-11-20 07:27:31.037860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:27.716 [2024-11-20 07:27:31.038084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:27.716 [2024-11-20 07:27:31.038105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:27.716 [2024-11-20 07:27:31.038117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:27.716 [2024-11-20 07:27:31.038129] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:27.716 [2024-11-20 07:27:31.050335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:27.716 [2024-11-20 07:27:31.050697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.716 [2024-11-20 07:27:31.050724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:27.716 [2024-11-20 07:27:31.050739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:27.716 [2024-11-20 07:27:31.050969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:27.716 [2024-11-20 07:27:31.051174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:27.716 [2024-11-20 07:27:31.051195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:27.716 [2024-11-20 07:27:31.051208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:27.717 [2024-11-20 07:27:31.051221] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:27.717 [2024-11-20 07:27:31.063446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:27.717 [2024-11-20 07:27:31.063778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.717 [2024-11-20 07:27:31.063806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:27.717 [2024-11-20 07:27:31.063821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:27.717 [2024-11-20 07:27:31.064043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:27.717 [2024-11-20 07:27:31.064249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:27.717 [2024-11-20 07:27:31.064269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:27.717 [2024-11-20 07:27:31.064282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:27.717 [2024-11-20 07:27:31.064323] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:27.717 [2024-11-20 07:27:31.076459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:27.717 [2024-11-20 07:27:31.076819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.717 [2024-11-20 07:27:31.076847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:27.717 [2024-11-20 07:27:31.076863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:27.717 [2024-11-20 07:27:31.077094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:27.717 [2024-11-20 07:27:31.077328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:27.717 [2024-11-20 07:27:31.077365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:27.717 [2024-11-20 07:27:31.077379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:27.717 [2024-11-20 07:27:31.077392] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:27.717 [2024-11-20 07:27:31.089625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:27.717 [2024-11-20 07:27:31.090035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.717 [2024-11-20 07:27:31.090065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:27.717 [2024-11-20 07:27:31.090081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:27.717 [2024-11-20 07:27:31.090331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:27.717 [2024-11-20 07:27:31.090533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:27.717 [2024-11-20 07:27:31.090554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:27.717 [2024-11-20 07:27:31.090567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:27.717 [2024-11-20 07:27:31.090580] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:27.717 [2024-11-20 07:27:31.102848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:27.717 [2024-11-20 07:27:31.103240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.717 [2024-11-20 07:27:31.103268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:27.717 [2024-11-20 07:27:31.103289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:27.717 [2024-11-20 07:27:31.103532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:27.717 [2024-11-20 07:27:31.103738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:27.717 [2024-11-20 07:27:31.103759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:27.717 [2024-11-20 07:27:31.103772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:27.717 [2024-11-20 07:27:31.103784] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:27.717 [2024-11-20 07:27:31.115949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:27.717 [2024-11-20 07:27:31.116359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.717 [2024-11-20 07:27:31.116389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:27.717 [2024-11-20 07:27:31.116408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:27.717 [2024-11-20 07:27:31.116644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:27.717 [2024-11-20 07:27:31.116849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:27.717 [2024-11-20 07:27:31.116870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:27.717 [2024-11-20 07:27:31.116883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:27.717 [2024-11-20 07:27:31.116895] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:27.717 [2024-11-20 07:27:31.129023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:27.717 [2024-11-20 07:27:31.129392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.717 [2024-11-20 07:27:31.129423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:27.717 [2024-11-20 07:27:31.129441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:27.717 [2024-11-20 07:27:31.129687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:27.717 [2024-11-20 07:27:31.129917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:27.717 [2024-11-20 07:27:31.129938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:27.717 [2024-11-20 07:27:31.129950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:27.717 [2024-11-20 07:27:31.129963] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:27.977 [2024-11-20 07:27:31.142808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:27.977 [2024-11-20 07:27:31.143218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.977 [2024-11-20 07:27:31.143247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:27.977 [2024-11-20 07:27:31.143263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:27.977 [2024-11-20 07:27:31.143512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:27.977 [2024-11-20 07:27:31.143743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:27.977 [2024-11-20 07:27:31.143765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:27.977 [2024-11-20 07:27:31.143779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:27.977 [2024-11-20 07:27:31.143792] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:27.977 [2024-11-20 07:27:31.156139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:27.977 [2024-11-20 07:27:31.156519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.977 [2024-11-20 07:27:31.156566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:27.977 [2024-11-20 07:27:31.156582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:27.977 [2024-11-20 07:27:31.156822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:27.977 [2024-11-20 07:27:31.157027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:27.977 [2024-11-20 07:27:31.157047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:27.977 [2024-11-20 07:27:31.157059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:27.977 [2024-11-20 07:27:31.157072] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:27.977 [2024-11-20 07:27:31.169299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:27.977 [2024-11-20 07:27:31.169644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.977 [2024-11-20 07:27:31.169673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:27.977 [2024-11-20 07:27:31.169689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:27.977 [2024-11-20 07:27:31.169913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:27.977 [2024-11-20 07:27:31.170117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:27.977 [2024-11-20 07:27:31.170137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:27.977 [2024-11-20 07:27:31.170149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:27.977 [2024-11-20 07:27:31.170161] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:27.977 [2024-11-20 07:27:31.182410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:27.977 [2024-11-20 07:27:31.182796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.977 [2024-11-20 07:27:31.182824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:27.977 [2024-11-20 07:27:31.182839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:27.977 [2024-11-20 07:27:31.183071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:27.977 [2024-11-20 07:27:31.183276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:27.977 [2024-11-20 07:27:31.183319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:27.977 [2024-11-20 07:27:31.183339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:27.977 [2024-11-20 07:27:31.183352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:27.977 [2024-11-20 07:27:31.195546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:27.977 [2024-11-20 07:27:31.195875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.977 [2024-11-20 07:27:31.195904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:27.977 [2024-11-20 07:27:31.195920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:27.977 [2024-11-20 07:27:31.196138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:27.977 [2024-11-20 07:27:31.196370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:27.977 [2024-11-20 07:27:31.196391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:27.977 [2024-11-20 07:27:31.196404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:27.977 [2024-11-20 07:27:31.196417] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:27.977 [2024-11-20 07:27:31.208722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:27.977 [2024-11-20 07:27:31.209528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.977 [2024-11-20 07:27:31.209557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:27.977 [2024-11-20 07:27:31.209572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:27.977 [2024-11-20 07:27:31.209774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:27.977 [2024-11-20 07:27:31.209971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:27.977 [2024-11-20 07:27:31.209991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:27.978 [2024-11-20 07:27:31.210003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:27.978 [2024-11-20 07:27:31.210015] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:27.978 [2024-11-20 07:27:31.221912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:27.978 [2024-11-20 07:27:31.222274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.978 [2024-11-20 07:27:31.222313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:27.978 [2024-11-20 07:27:31.222346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:27.978 [2024-11-20 07:27:31.222582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:27.978 [2024-11-20 07:27:31.222798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:27.978 [2024-11-20 07:27:31.222819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:27.978 [2024-11-20 07:27:31.222832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:27.978 [2024-11-20 07:27:31.222843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:27.978 [2024-11-20 07:27:31.235282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:27.978 [2024-11-20 07:27:31.235686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.978 [2024-11-20 07:27:31.235743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:27.978 [2024-11-20 07:27:31.235760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:27.978 [2024-11-20 07:27:31.236013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:27.978 [2024-11-20 07:27:31.236217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:27.978 [2024-11-20 07:27:31.236236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:27.978 [2024-11-20 07:27:31.236248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:27.978 [2024-11-20 07:27:31.236260] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:27.978 [2024-11-20 07:27:31.248496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:27.978 [2024-11-20 07:27:31.248898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.978 [2024-11-20 07:27:31.248936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:27.978 [2024-11-20 07:27:31.248951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:27.978 [2024-11-20 07:27:31.249161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:27.978 [2024-11-20 07:27:31.249393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:27.978 [2024-11-20 07:27:31.249413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:27.978 [2024-11-20 07:27:31.249426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:27.978 [2024-11-20 07:27:31.249438] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:27.978 [2024-11-20 07:27:31.261693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:27.978 [2024-11-20 07:27:31.262049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.978 [2024-11-20 07:27:31.262141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:27.978 [2024-11-20 07:27:31.262156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:27.978 [2024-11-20 07:27:31.262396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:27.978 [2024-11-20 07:27:31.262592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:27.978 [2024-11-20 07:27:31.262625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:27.978 [2024-11-20 07:27:31.262637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:27.978 [2024-11-20 07:27:31.262649] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:27.978 7390.33 IOPS, 28.87 MiB/s [2024-11-20T06:27:31.411Z] [2024-11-20 07:27:31.274974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:27.978 [2024-11-20 07:27:31.275294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.978 [2024-11-20 07:27:31.275336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:27.978 [2024-11-20 07:27:31.275383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:27.978 [2024-11-20 07:27:31.275606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:27.978 [2024-11-20 07:27:31.275818] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:27.978 [2024-11-20 07:27:31.275838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:27.978 [2024-11-20 07:27:31.275851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:27.978 [2024-11-20 07:27:31.275863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:27.978 [2024-11-20 07:27:31.288231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:27.978 [2024-11-20 07:27:31.288565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.978 [2024-11-20 07:27:31.288594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:27.978 [2024-11-20 07:27:31.288609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:27.978 [2024-11-20 07:27:31.288811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:27.978 [2024-11-20 07:27:31.289020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:27.978 [2024-11-20 07:27:31.289040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:27.978 [2024-11-20 07:27:31.289053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:27.978 [2024-11-20 07:27:31.289065] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:27.978 [2024-11-20 07:27:31.301439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:27.978 [2024-11-20 07:27:31.301806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.978 [2024-11-20 07:27:31.301833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:27.978 [2024-11-20 07:27:31.301849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:27.978 [2024-11-20 07:27:31.302082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:27.978 [2024-11-20 07:27:31.302287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:27.978 [2024-11-20 07:27:31.302338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:27.978 [2024-11-20 07:27:31.302354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:27.978 [2024-11-20 07:27:31.302366] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:27.978 [2024-11-20 07:27:31.314714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:27.978 [2024-11-20 07:27:31.315144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.978 [2024-11-20 07:27:31.315187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:27.978 [2024-11-20 07:27:31.315203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:27.978 [2024-11-20 07:27:31.315462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:27.978 [2024-11-20 07:27:31.315675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:27.978 [2024-11-20 07:27:31.315694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:27.978 [2024-11-20 07:27:31.315706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:27.978 [2024-11-20 07:27:31.315717] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:27.978 [2024-11-20 07:27:31.327936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:27.978 [2024-11-20 07:27:31.328357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.978 [2024-11-20 07:27:31.328388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:27.978 [2024-11-20 07:27:31.328404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:27.978 [2024-11-20 07:27:31.328651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:27.978 [2024-11-20 07:27:31.328854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:27.978 [2024-11-20 07:27:31.328873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:27.978 [2024-11-20 07:27:31.328886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:27.978 [2024-11-20 07:27:31.328898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:27.978 [2024-11-20 07:27:31.341058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:27.978 [2024-11-20 07:27:31.341368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.978 [2024-11-20 07:27:31.341396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:27.978 [2024-11-20 07:27:31.341411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:27.978 [2024-11-20 07:27:31.341628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:27.979 [2024-11-20 07:27:31.341834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:27.979 [2024-11-20 07:27:31.341853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:27.979 [2024-11-20 07:27:31.341865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:27.979 [2024-11-20 07:27:31.341877] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:27.979 [2024-11-20 07:27:31.354222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:27.979 [2024-11-20 07:27:31.354622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.979 [2024-11-20 07:27:31.354656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:27.979 [2024-11-20 07:27:31.354686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:27.979 [2024-11-20 07:27:31.354937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:27.979 [2024-11-20 07:27:31.355126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:27.979 [2024-11-20 07:27:31.355145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:27.979 [2024-11-20 07:27:31.355162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:27.979 [2024-11-20 07:27:31.355174] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:27.979 [2024-11-20 07:27:31.367339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:27.979 [2024-11-20 07:27:31.367701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.979 [2024-11-20 07:27:31.367727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:27.979 [2024-11-20 07:27:31.367742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:27.979 [2024-11-20 07:27:31.367938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:27.979 [2024-11-20 07:27:31.368159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:27.979 [2024-11-20 07:27:31.368178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:27.979 [2024-11-20 07:27:31.368191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:27.979 [2024-11-20 07:27:31.368202] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:27.979 [2024-11-20 07:27:31.380558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:27.979 [2024-11-20 07:27:31.381018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.979 [2024-11-20 07:27:31.381050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:27.979 [2024-11-20 07:27:31.381066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:27.979 [2024-11-20 07:27:31.381317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:27.979 [2024-11-20 07:27:31.381549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:27.979 [2024-11-20 07:27:31.381570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:27.979 [2024-11-20 07:27:31.381583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:27.979 [2024-11-20 07:27:31.381596] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:27.979 [2024-11-20 07:27:31.393874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:27.979 [2024-11-20 07:27:31.394220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.979 [2024-11-20 07:27:31.394248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:27.979 [2024-11-20 07:27:31.394264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:27.979 [2024-11-20 07:27:31.394529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:27.979 [2024-11-20 07:27:31.394736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:27.979 [2024-11-20 07:27:31.394756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:27.979 [2024-11-20 07:27:31.394768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:27.979 [2024-11-20 07:27:31.394779] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.239 [2024-11-20 07:27:31.407714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.239 [2024-11-20 07:27:31.408082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.239 [2024-11-20 07:27:31.408125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.239 [2024-11-20 07:27:31.408140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.239 [2024-11-20 07:27:31.408380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.239 [2024-11-20 07:27:31.408575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.239 [2024-11-20 07:27:31.408594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.239 [2024-11-20 07:27:31.408621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.239 [2024-11-20 07:27:31.408644] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.239 [2024-11-20 07:27:31.420744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.239 [2024-11-20 07:27:31.421158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.239 [2024-11-20 07:27:31.421186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.239 [2024-11-20 07:27:31.421205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.239 [2024-11-20 07:27:31.421471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.239 [2024-11-20 07:27:31.421680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.239 [2024-11-20 07:27:31.421699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.239 [2024-11-20 07:27:31.421712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.239 [2024-11-20 07:27:31.421724] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.239 [2024-11-20 07:27:31.433777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.239 [2024-11-20 07:27:31.434121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.239 [2024-11-20 07:27:31.434150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.239 [2024-11-20 07:27:31.434165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.239 [2024-11-20 07:27:31.434405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.239 [2024-11-20 07:27:31.434610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.239 [2024-11-20 07:27:31.434629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.239 [2024-11-20 07:27:31.434646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.239 [2024-11-20 07:27:31.434658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.239 [2024-11-20 07:27:31.446807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.239 [2024-11-20 07:27:31.447225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.239 [2024-11-20 07:27:31.447253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.239 [2024-11-20 07:27:31.447279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.239 [2024-11-20 07:27:31.447558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.239 [2024-11-20 07:27:31.447765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.239 [2024-11-20 07:27:31.447784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.239 [2024-11-20 07:27:31.447796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.239 [2024-11-20 07:27:31.447808] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.239 [2024-11-20 07:27:31.459986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.239 [2024-11-20 07:27:31.460339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.239 [2024-11-20 07:27:31.460368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.239 [2024-11-20 07:27:31.460385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.239 [2024-11-20 07:27:31.460625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.239 [2024-11-20 07:27:31.460830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.239 [2024-11-20 07:27:31.460850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.239 [2024-11-20 07:27:31.460862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.239 [2024-11-20 07:27:31.460874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.239 [2024-11-20 07:27:31.472979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.239 [2024-11-20 07:27:31.473366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.239 [2024-11-20 07:27:31.473394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.239 [2024-11-20 07:27:31.473408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.239 [2024-11-20 07:27:31.473625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.239 [2024-11-20 07:27:31.473830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.239 [2024-11-20 07:27:31.473849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.239 [2024-11-20 07:27:31.473861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.239 [2024-11-20 07:27:31.473873] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.239 [2024-11-20 07:27:31.486187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.239 [2024-11-20 07:27:31.486587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.239 [2024-11-20 07:27:31.486624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.239 [2024-11-20 07:27:31.486640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.239 [2024-11-20 07:27:31.486861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.239 [2024-11-20 07:27:31.487077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.239 [2024-11-20 07:27:31.487096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.239 [2024-11-20 07:27:31.487108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.239 [2024-11-20 07:27:31.487119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.239 [2024-11-20 07:27:31.499325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.239 [2024-11-20 07:27:31.499643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.239 [2024-11-20 07:27:31.499686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.239 [2024-11-20 07:27:31.499701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.239 [2024-11-20 07:27:31.499936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.239 [2024-11-20 07:27:31.500147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.239 [2024-11-20 07:27:31.500166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.239 [2024-11-20 07:27:31.500178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.239 [2024-11-20 07:27:31.500190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.239 [2024-11-20 07:27:31.512755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.239 [2024-11-20 07:27:31.513136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.239 [2024-11-20 07:27:31.513174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.239 [2024-11-20 07:27:31.513189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.239 [2024-11-20 07:27:31.513427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.239 [2024-11-20 07:27:31.513678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.239 [2024-11-20 07:27:31.513698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.239 [2024-11-20 07:27:31.513710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.239 [2024-11-20 07:27:31.513722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.239 [2024-11-20 07:27:31.526100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.239 [2024-11-20 07:27:31.526457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.239 [2024-11-20 07:27:31.526486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.239 [2024-11-20 07:27:31.526502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.240 [2024-11-20 07:27:31.526751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.240 [2024-11-20 07:27:31.526946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.240 [2024-11-20 07:27:31.526965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.240 [2024-11-20 07:27:31.526998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.240 [2024-11-20 07:27:31.527011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.240 [2024-11-20 07:27:31.539817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.240 [2024-11-20 07:27:31.540172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.240 [2024-11-20 07:27:31.540200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.240 [2024-11-20 07:27:31.540216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.240 [2024-11-20 07:27:31.540440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.240 [2024-11-20 07:27:31.540671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.240 [2024-11-20 07:27:31.540692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.240 [2024-11-20 07:27:31.540705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.240 [2024-11-20 07:27:31.540717] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.240 [2024-11-20 07:27:31.553264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.240 [2024-11-20 07:27:31.553623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.240 [2024-11-20 07:27:31.553652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.240 [2024-11-20 07:27:31.553669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.240 [2024-11-20 07:27:31.553898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.240 [2024-11-20 07:27:31.554137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.240 [2024-11-20 07:27:31.554158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.240 [2024-11-20 07:27:31.554171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.240 [2024-11-20 07:27:31.554199] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.240 [2024-11-20 07:27:31.566566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.240 [2024-11-20 07:27:31.566947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.240 [2024-11-20 07:27:31.566987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.240 [2024-11-20 07:27:31.567003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.240 [2024-11-20 07:27:31.567237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.240 [2024-11-20 07:27:31.567485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.240 [2024-11-20 07:27:31.567509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.240 [2024-11-20 07:27:31.567523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.240 [2024-11-20 07:27:31.567536] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.240 [2024-11-20 07:27:31.579812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.240 [2024-11-20 07:27:31.580161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.240 [2024-11-20 07:27:31.580189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.240 [2024-11-20 07:27:31.580205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.240 [2024-11-20 07:27:31.580476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.240 [2024-11-20 07:27:31.580712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.240 [2024-11-20 07:27:31.580732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.240 [2024-11-20 07:27:31.580745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.240 [2024-11-20 07:27:31.580757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.240 [2024-11-20 07:27:31.592972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.240 [2024-11-20 07:27:31.593385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.240 [2024-11-20 07:27:31.593414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.240 [2024-11-20 07:27:31.593430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.240 [2024-11-20 07:27:31.593683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.240 [2024-11-20 07:27:31.593887] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.240 [2024-11-20 07:27:31.593906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.240 [2024-11-20 07:27:31.593918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.240 [2024-11-20 07:27:31.593930] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.240 [2024-11-20 07:27:31.606173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.240 [2024-11-20 07:27:31.606642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.240 [2024-11-20 07:27:31.606686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.240 [2024-11-20 07:27:31.606702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.240 [2024-11-20 07:27:31.606954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.240 [2024-11-20 07:27:31.607155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.240 [2024-11-20 07:27:31.607174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.240 [2024-11-20 07:27:31.607187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.240 [2024-11-20 07:27:31.607200] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.240 [2024-11-20 07:27:31.619937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.240 [2024-11-20 07:27:31.620342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.240 [2024-11-20 07:27:31.620371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.240 [2024-11-20 07:27:31.620392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.240 [2024-11-20 07:27:31.620608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.240 [2024-11-20 07:27:31.620829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.240 [2024-11-20 07:27:31.620849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.240 [2024-11-20 07:27:31.620862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.240 [2024-11-20 07:27:31.620874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.240 [2024-11-20 07:27:31.633319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.240 [2024-11-20 07:27:31.633796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.240 [2024-11-20 07:27:31.633845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.240 [2024-11-20 07:27:31.633861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.240 [2024-11-20 07:27:31.634117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.240 [2024-11-20 07:27:31.634386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.240 [2024-11-20 07:27:31.634408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.240 [2024-11-20 07:27:31.634422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.240 [2024-11-20 07:27:31.634436] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.240 [2024-11-20 07:27:31.646778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.240 [2024-11-20 07:27:31.647176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.240 [2024-11-20 07:27:31.647232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.240 [2024-11-20 07:27:31.647248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.240 [2024-11-20 07:27:31.647506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.240 [2024-11-20 07:27:31.647735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.241 [2024-11-20 07:27:31.647755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.241 [2024-11-20 07:27:31.647767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.241 [2024-11-20 07:27:31.647778] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.241 [2024-11-20 07:27:31.660000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.241 [2024-11-20 07:27:31.660366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.241 [2024-11-20 07:27:31.660405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.241 [2024-11-20 07:27:31.660420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.241 [2024-11-20 07:27:31.660641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.241 [2024-11-20 07:27:31.660855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.241 [2024-11-20 07:27:31.660879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.241 [2024-11-20 07:27:31.660892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.241 [2024-11-20 07:27:31.660904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.500 [2024-11-20 07:27:31.673259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.500 [2024-11-20 07:27:31.673665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.500 [2024-11-20 07:27:31.673718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.500 [2024-11-20 07:27:31.673733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.500 [2024-11-20 07:27:31.673980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.500 [2024-11-20 07:27:31.674222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.500 [2024-11-20 07:27:31.674244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.500 [2024-11-20 07:27:31.674274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.500 [2024-11-20 07:27:31.674288] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.500 [2024-11-20 07:27:31.686521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.500 [2024-11-20 07:27:31.686847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.500 [2024-11-20 07:27:31.686875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.500 [2024-11-20 07:27:31.686890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.500 [2024-11-20 07:27:31.687107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.500 [2024-11-20 07:27:31.687340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.500 [2024-11-20 07:27:31.687377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.500 [2024-11-20 07:27:31.687391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.500 [2024-11-20 07:27:31.687403] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.500 [2024-11-20 07:27:31.699704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.500 [2024-11-20 07:27:31.700030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.500 [2024-11-20 07:27:31.700058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.500 [2024-11-20 07:27:31.700073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.501 [2024-11-20 07:27:31.700290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.501 [2024-11-20 07:27:31.700512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.501 [2024-11-20 07:27:31.700532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.501 [2024-11-20 07:27:31.700545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.501 [2024-11-20 07:27:31.700561] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.501 [2024-11-20 07:27:31.712840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.501 [2024-11-20 07:27:31.713244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.501 [2024-11-20 07:27:31.713272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.501 [2024-11-20 07:27:31.713287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.501 [2024-11-20 07:27:31.713532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.501 [2024-11-20 07:27:31.713738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.501 [2024-11-20 07:27:31.713758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.501 [2024-11-20 07:27:31.713770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.501 [2024-11-20 07:27:31.713782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.501 [2024-11-20 07:27:31.725912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.501 [2024-11-20 07:27:31.726254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.501 [2024-11-20 07:27:31.726281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.501 [2024-11-20 07:27:31.726296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.501 [2024-11-20 07:27:31.726564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.501 [2024-11-20 07:27:31.726769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.501 [2024-11-20 07:27:31.726788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.501 [2024-11-20 07:27:31.726800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.501 [2024-11-20 07:27:31.726812] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.501 [2024-11-20 07:27:31.739019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.501 [2024-11-20 07:27:31.739423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.501 [2024-11-20 07:27:31.739453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.501 [2024-11-20 07:27:31.739469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.501 [2024-11-20 07:27:31.739710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.501 [2024-11-20 07:27:31.739915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.501 [2024-11-20 07:27:31.739934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.501 [2024-11-20 07:27:31.739946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.501 [2024-11-20 07:27:31.739958] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.501 [2024-11-20 07:27:31.752033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.501 [2024-11-20 07:27:31.752334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.501 [2024-11-20 07:27:31.752361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.501 [2024-11-20 07:27:31.752376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.501 [2024-11-20 07:27:31.752587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.501 [2024-11-20 07:27:31.752792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.501 [2024-11-20 07:27:31.752811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.501 [2024-11-20 07:27:31.752823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.501 [2024-11-20 07:27:31.752834] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.501 [2024-11-20 07:27:31.765224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.501 [2024-11-20 07:27:31.765597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.501 [2024-11-20 07:27:31.765626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.501 [2024-11-20 07:27:31.765646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.501 [2024-11-20 07:27:31.765884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.501 [2024-11-20 07:27:31.766090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.501 [2024-11-20 07:27:31.766109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.501 [2024-11-20 07:27:31.766121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.501 [2024-11-20 07:27:31.766132] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.501 [2024-11-20 07:27:31.778257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.501 [2024-11-20 07:27:31.778609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.501 [2024-11-20 07:27:31.778637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.501 [2024-11-20 07:27:31.778652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.501 [2024-11-20 07:27:31.778888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.501 [2024-11-20 07:27:31.779092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.501 [2024-11-20 07:27:31.779111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.501 [2024-11-20 07:27:31.779123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.501 [2024-11-20 07:27:31.779135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.501 [2024-11-20 07:27:31.791485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.501 [2024-11-20 07:27:31.791815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.501 [2024-11-20 07:27:31.791843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.501 [2024-11-20 07:27:31.791858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.501 [2024-11-20 07:27:31.792080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.501 [2024-11-20 07:27:31.792300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.501 [2024-11-20 07:27:31.792335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.501 [2024-11-20 07:27:31.792348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.501 [2024-11-20 07:27:31.792360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.501 [2024-11-20 07:27:31.804614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.501 [2024-11-20 07:27:31.805052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.501 [2024-11-20 07:27:31.805081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.501 [2024-11-20 07:27:31.805109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.501 [2024-11-20 07:27:31.805374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.501 [2024-11-20 07:27:31.805576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.501 [2024-11-20 07:27:31.805608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.501 [2024-11-20 07:27:31.805621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.501 [2024-11-20 07:27:31.805634] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.501 [2024-11-20 07:27:31.817881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.501 [2024-11-20 07:27:31.818224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.501 [2024-11-20 07:27:31.818251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.501 [2024-11-20 07:27:31.818267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.501 [2024-11-20 07:27:31.818531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.501 [2024-11-20 07:27:31.818740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.501 [2024-11-20 07:27:31.818759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.501 [2024-11-20 07:27:31.818772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.501 [2024-11-20 07:27:31.818783] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.501 [2024-11-20 07:27:31.831108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.501 [2024-11-20 07:27:31.831463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.502 [2024-11-20 07:27:31.831493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.502 [2024-11-20 07:27:31.831516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.502 [2024-11-20 07:27:31.831766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.502 [2024-11-20 07:27:31.831971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.502 [2024-11-20 07:27:31.831995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.502 [2024-11-20 07:27:31.832008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.502 [2024-11-20 07:27:31.832020] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.502 [2024-11-20 07:27:31.844339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.502 [2024-11-20 07:27:31.844659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.502 [2024-11-20 07:27:31.844685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.502 [2024-11-20 07:27:31.844700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.502 [2024-11-20 07:27:31.844915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.502 [2024-11-20 07:27:31.845120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.502 [2024-11-20 07:27:31.845139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.502 [2024-11-20 07:27:31.845151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.502 [2024-11-20 07:27:31.845163] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.502 [2024-11-20 07:27:31.857345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.502 [2024-11-20 07:27:31.857702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.502 [2024-11-20 07:27:31.857728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.502 [2024-11-20 07:27:31.857743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.502 [2024-11-20 07:27:31.857938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.502 [2024-11-20 07:27:31.858160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.502 [2024-11-20 07:27:31.858179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.502 [2024-11-20 07:27:31.858191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.502 [2024-11-20 07:27:31.858203] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.502 [2024-11-20 07:27:31.870373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.502 [2024-11-20 07:27:31.870730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.502 [2024-11-20 07:27:31.870781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.502 [2024-11-20 07:27:31.870796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.502 [2024-11-20 07:27:31.871013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.502 [2024-11-20 07:27:31.871217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.502 [2024-11-20 07:27:31.871236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.502 [2024-11-20 07:27:31.871248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.502 [2024-11-20 07:27:31.871264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.502 [2024-11-20 07:27:31.883633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.502 [2024-11-20 07:27:31.884018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.502 [2024-11-20 07:27:31.884046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.502 [2024-11-20 07:27:31.884063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.502 [2024-11-20 07:27:31.884315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.502 [2024-11-20 07:27:31.884552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.502 [2024-11-20 07:27:31.884574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.502 [2024-11-20 07:27:31.884588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.502 [2024-11-20 07:27:31.884601] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.502 [2024-11-20 07:27:31.896841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.502 [2024-11-20 07:27:31.897256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.502 [2024-11-20 07:27:31.897298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.502 [2024-11-20 07:27:31.897508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.502 [2024-11-20 07:27:31.897753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.502 [2024-11-20 07:27:31.897942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.502 [2024-11-20 07:27:31.897961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.502 [2024-11-20 07:27:31.897974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.502 [2024-11-20 07:27:31.897985] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.502 [2024-11-20 07:27:31.910045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.502 [2024-11-20 07:27:31.910418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.502 [2024-11-20 07:27:31.910456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.502 [2024-11-20 07:27:31.910472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.502 [2024-11-20 07:27:31.910693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.502 [2024-11-20 07:27:31.910897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.502 [2024-11-20 07:27:31.910916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.502 [2024-11-20 07:27:31.910928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.502 [2024-11-20 07:27:31.910940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.502 [2024-11-20 07:27:31.923176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.502 [2024-11-20 07:27:31.923604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.502 [2024-11-20 07:27:31.923632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.502 [2024-11-20 07:27:31.923648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.502 [2024-11-20 07:27:31.923884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.502 [2024-11-20 07:27:31.924088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.502 [2024-11-20 07:27:31.924107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.502 [2024-11-20 07:27:31.924120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.502 [2024-11-20 07:27:31.924132] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.762 [2024-11-20 07:27:31.936690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.762 [2024-11-20 07:27:31.937113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.762 [2024-11-20 07:27:31.937141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.762 [2024-11-20 07:27:31.937157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.762 [2024-11-20 07:27:31.937404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.762 [2024-11-20 07:27:31.937629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.762 [2024-11-20 07:27:31.937659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.762 [2024-11-20 07:27:31.937671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.762 [2024-11-20 07:27:31.937682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.762 [2024-11-20 07:27:31.949923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.762 [2024-11-20 07:27:31.950265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.762 [2024-11-20 07:27:31.950322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.762 [2024-11-20 07:27:31.950340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.762 [2024-11-20 07:27:31.950576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.762 [2024-11-20 07:27:31.950780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.762 [2024-11-20 07:27:31.950800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.762 [2024-11-20 07:27:31.950812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.763 [2024-11-20 07:27:31.950823] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.763 [2024-11-20 07:27:31.962978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.763 [2024-11-20 07:27:31.963374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.763 [2024-11-20 07:27:31.963402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.763 [2024-11-20 07:27:31.963418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.763 [2024-11-20 07:27:31.963678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.763 [2024-11-20 07:27:31.963867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.763 [2024-11-20 07:27:31.963886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.763 [2024-11-20 07:27:31.963898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.763 [2024-11-20 07:27:31.963909] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.763 [2024-11-20 07:27:31.976113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.763 [2024-11-20 07:27:31.976544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.763 [2024-11-20 07:27:31.976573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.763 [2024-11-20 07:27:31.976604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.763 [2024-11-20 07:27:31.976839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.763 [2024-11-20 07:27:31.977042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.763 [2024-11-20 07:27:31.977061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.763 [2024-11-20 07:27:31.977074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.763 [2024-11-20 07:27:31.977086] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.763 [2024-11-20 07:27:31.989268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.763 [2024-11-20 07:27:31.989643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.763 [2024-11-20 07:27:31.989673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.763 [2024-11-20 07:27:31.989689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.763 [2024-11-20 07:27:31.989927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.763 [2024-11-20 07:27:31.990132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.763 [2024-11-20 07:27:31.990153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.763 [2024-11-20 07:27:31.990166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.763 [2024-11-20 07:27:31.990178] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.763 [2024-11-20 07:27:32.002352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.763 [2024-11-20 07:27:32.002761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.763 [2024-11-20 07:27:32.002790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.763 [2024-11-20 07:27:32.002806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.763 [2024-11-20 07:27:32.003037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.763 [2024-11-20 07:27:32.003243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.763 [2024-11-20 07:27:32.003268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.763 [2024-11-20 07:27:32.003281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.763 [2024-11-20 07:27:32.003293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.763 [2024-11-20 07:27:32.015509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.763 [2024-11-20 07:27:32.015944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.763 [2024-11-20 07:27:32.015974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.763 [2024-11-20 07:27:32.015990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.763 [2024-11-20 07:27:32.016227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.763 [2024-11-20 07:27:32.016466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.763 [2024-11-20 07:27:32.016488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.763 [2024-11-20 07:27:32.016501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.763 [2024-11-20 07:27:32.016514] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.763 [2024-11-20 07:27:32.028676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.763 [2024-11-20 07:27:32.029025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.763 [2024-11-20 07:27:32.029053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.763 [2024-11-20 07:27:32.029069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.763 [2024-11-20 07:27:32.029316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.763 [2024-11-20 07:27:32.029512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.763 [2024-11-20 07:27:32.029532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.763 [2024-11-20 07:27:32.029546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.763 [2024-11-20 07:27:32.029558] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.763 [2024-11-20 07:27:32.041671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.763 [2024-11-20 07:27:32.042016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.763 [2024-11-20 07:27:32.042044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.763 [2024-11-20 07:27:32.042060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.763 [2024-11-20 07:27:32.042293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.763 [2024-11-20 07:27:32.042508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.763 [2024-11-20 07:27:32.042529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.763 [2024-11-20 07:27:32.042541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.763 [2024-11-20 07:27:32.042559] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.763 [2024-11-20 07:27:32.055005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.763 [2024-11-20 07:27:32.055350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.763 [2024-11-20 07:27:32.055380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.763 [2024-11-20 07:27:32.055396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.763 [2024-11-20 07:27:32.055627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.763 [2024-11-20 07:27:32.055831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.763 [2024-11-20 07:27:32.055852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.763 [2024-11-20 07:27:32.055865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.763 [2024-11-20 07:27:32.055877] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.763 [2024-11-20 07:27:32.068208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.763 [2024-11-20 07:27:32.068585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.763 [2024-11-20 07:27:32.068616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.763 [2024-11-20 07:27:32.068648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.763 [2024-11-20 07:27:32.068882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.763 [2024-11-20 07:27:32.069085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.763 [2024-11-20 07:27:32.069105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.763 [2024-11-20 07:27:32.069118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.763 [2024-11-20 07:27:32.069130] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.763 [2024-11-20 07:27:32.081323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.763 [2024-11-20 07:27:32.081673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.763 [2024-11-20 07:27:32.081702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.763 [2024-11-20 07:27:32.081718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.763 [2024-11-20 07:27:32.081954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.763 [2024-11-20 07:27:32.082158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.763 [2024-11-20 07:27:32.082179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.764 [2024-11-20 07:27:32.082192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.764 [2024-11-20 07:27:32.082203] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.764 [2024-11-20 07:27:32.094516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.764 [2024-11-20 07:27:32.094861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.764 [2024-11-20 07:27:32.094894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.764 [2024-11-20 07:27:32.094910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.764 [2024-11-20 07:27:32.095143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.764 [2024-11-20 07:27:32.095360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.764 [2024-11-20 07:27:32.095380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.764 [2024-11-20 07:27:32.095393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.764 [2024-11-20 07:27:32.095405] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.764 [2024-11-20 07:27:32.107698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.764 [2024-11-20 07:27:32.108049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.764 [2024-11-20 07:27:32.108078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.764 [2024-11-20 07:27:32.108094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.764 [2024-11-20 07:27:32.108341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.764 [2024-11-20 07:27:32.108548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.764 [2024-11-20 07:27:32.108570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.764 [2024-11-20 07:27:32.108584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.764 [2024-11-20 07:27:32.108597] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.764 [2024-11-20 07:27:32.120750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.764 [2024-11-20 07:27:32.121165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.764 [2024-11-20 07:27:32.121194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.764 [2024-11-20 07:27:32.121210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.764 [2024-11-20 07:27:32.121459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.764 [2024-11-20 07:27:32.121682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.764 [2024-11-20 07:27:32.121703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.764 [2024-11-20 07:27:32.121715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.764 [2024-11-20 07:27:32.121727] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.764 [2024-11-20 07:27:32.133897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.764 [2024-11-20 07:27:32.134260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.764 [2024-11-20 07:27:32.134289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.764 [2024-11-20 07:27:32.134317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.764 [2024-11-20 07:27:32.134558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.764 [2024-11-20 07:27:32.134821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.764 [2024-11-20 07:27:32.134842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.764 [2024-11-20 07:27:32.134856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.764 [2024-11-20 07:27:32.134868] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.764 [2024-11-20 07:27:32.147222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.764 [2024-11-20 07:27:32.147579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.764 [2024-11-20 07:27:32.147610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.764 [2024-11-20 07:27:32.147626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.764 [2024-11-20 07:27:32.147866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.764 [2024-11-20 07:27:32.148078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.764 [2024-11-20 07:27:32.148099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.764 [2024-11-20 07:27:32.148112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.764 [2024-11-20 07:27:32.148123] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.764 [2024-11-20 07:27:32.160452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.764 [2024-11-20 07:27:32.160821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.764 [2024-11-20 07:27:32.160850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.764 [2024-11-20 07:27:32.160865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.764 [2024-11-20 07:27:32.161101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.764 [2024-11-20 07:27:32.161336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.764 [2024-11-20 07:27:32.161359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.764 [2024-11-20 07:27:32.161373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.764 [2024-11-20 07:27:32.161386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.764 [2024-11-20 07:27:32.173479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.764 [2024-11-20 07:27:32.173774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.764 [2024-11-20 07:27:32.173816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.764 [2024-11-20 07:27:32.173832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.764 [2024-11-20 07:27:32.174049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.764 [2024-11-20 07:27:32.174253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.764 [2024-11-20 07:27:32.174277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.764 [2024-11-20 07:27:32.174290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.764 [2024-11-20 07:27:32.174327] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.764 [2024-11-20 07:27:32.186740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.764 [2024-11-20 07:27:32.187145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.764 [2024-11-20 07:27:32.187174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:28.764 [2024-11-20 07:27:32.187189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:28.764 [2024-11-20 07:27:32.187435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:28.764 [2024-11-20 07:27:32.187682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.764 [2024-11-20 07:27:32.187705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.764 [2024-11-20 07:27:32.187718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.764 [2024-11-20 07:27:32.187746] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.024 [2024-11-20 07:27:32.200361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.024 [2024-11-20 07:27:32.200724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.024 [2024-11-20 07:27:32.200752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.024 [2024-11-20 07:27:32.200767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.024 [2024-11-20 07:27:32.200984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.024 [2024-11-20 07:27:32.201190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.024 [2024-11-20 07:27:32.201211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.024 [2024-11-20 07:27:32.201223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.024 [2024-11-20 07:27:32.201235] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.024 [2024-11-20 07:27:32.213407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.024 [2024-11-20 07:27:32.213719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.024 [2024-11-20 07:27:32.213747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.024 [2024-11-20 07:27:32.213762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.024 [2024-11-20 07:27:32.213979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.024 [2024-11-20 07:27:32.214184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.024 [2024-11-20 07:27:32.214204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.024 [2024-11-20 07:27:32.214217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.024 [2024-11-20 07:27:32.214230] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.024 [2024-11-20 07:27:32.226624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.024 [2024-11-20 07:27:32.227043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.024 [2024-11-20 07:27:32.227070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.024 [2024-11-20 07:27:32.227086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.024 [2024-11-20 07:27:32.227327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.024 [2024-11-20 07:27:32.227523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.024 [2024-11-20 07:27:32.227544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.024 [2024-11-20 07:27:32.227557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.024 [2024-11-20 07:27:32.227571] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.024 [2024-11-20 07:27:32.239949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.024 [2024-11-20 07:27:32.240361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.024 [2024-11-20 07:27:32.240391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.024 [2024-11-20 07:27:32.240408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.024 [2024-11-20 07:27:32.240652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.025 [2024-11-20 07:27:32.240842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.025 [2024-11-20 07:27:32.240862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.025 [2024-11-20 07:27:32.240875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.025 [2024-11-20 07:27:32.240887] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.025 [2024-11-20 07:27:32.253254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.025 [2024-11-20 07:27:32.253656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.025 [2024-11-20 07:27:32.253700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.025 [2024-11-20 07:27:32.253716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.025 [2024-11-20 07:27:32.253948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.025 [2024-11-20 07:27:32.254137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.025 [2024-11-20 07:27:32.254157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.025 [2024-11-20 07:27:32.254170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.025 [2024-11-20 07:27:32.254182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.025 [2024-11-20 07:27:32.266354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.025 [2024-11-20 07:27:32.266697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.025 [2024-11-20 07:27:32.266730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.025 [2024-11-20 07:27:32.266746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.025 [2024-11-20 07:27:32.266944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.025 [2024-11-20 07:27:32.267164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.025 [2024-11-20 07:27:32.267184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.025 [2024-11-20 07:27:32.267197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.025 [2024-11-20 07:27:32.267209] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.025 5542.75 IOPS, 21.65 MiB/s [2024-11-20T06:27:32.458Z] [2024-11-20 07:27:32.279426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.025 [2024-11-20 07:27:32.279739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.025 [2024-11-20 07:27:32.279767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.025 [2024-11-20 07:27:32.279783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.025 [2024-11-20 07:27:32.280000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.025 [2024-11-20 07:27:32.280205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.025 [2024-11-20 07:27:32.280226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.025 [2024-11-20 07:27:32.280239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.025 [2024-11-20 07:27:32.280251] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.025 [2024-11-20 07:27:32.292446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.025 [2024-11-20 07:27:32.292818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.025 [2024-11-20 07:27:32.292846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.025 [2024-11-20 07:27:32.292861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.025 [2024-11-20 07:27:32.293080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.025 [2024-11-20 07:27:32.293298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.025 [2024-11-20 07:27:32.293331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.025 [2024-11-20 07:27:32.293358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.025 [2024-11-20 07:27:32.293373] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.025 [2024-11-20 07:27:32.305566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.025 [2024-11-20 07:27:32.305948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.025 [2024-11-20 07:27:32.305977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.025 [2024-11-20 07:27:32.305993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.025 [2024-11-20 07:27:32.306234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.025 [2024-11-20 07:27:32.306472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.025 [2024-11-20 07:27:32.306495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.025 [2024-11-20 07:27:32.306508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.025 [2024-11-20 07:27:32.306521] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.025 [2024-11-20 07:27:32.318822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.025 [2024-11-20 07:27:32.319231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.025 [2024-11-20 07:27:32.319259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.025 [2024-11-20 07:27:32.319275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.025 [2024-11-20 07:27:32.319542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.025 [2024-11-20 07:27:32.319765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.025 [2024-11-20 07:27:32.319785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.025 [2024-11-20 07:27:32.319797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.025 [2024-11-20 07:27:32.319809] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.025 [2024-11-20 07:27:32.331908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.025 [2024-11-20 07:27:32.332317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.025 [2024-11-20 07:27:32.332345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.025 [2024-11-20 07:27:32.332361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.025 [2024-11-20 07:27:32.332598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.025 [2024-11-20 07:27:32.332804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.025 [2024-11-20 07:27:32.332823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.025 [2024-11-20 07:27:32.332835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.025 [2024-11-20 07:27:32.332848] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.025 [2024-11-20 07:27:32.345089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.025 [2024-11-20 07:27:32.345433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.025 [2024-11-20 07:27:32.345462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.025 [2024-11-20 07:27:32.345478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.025 [2024-11-20 07:27:32.345707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.025 [2024-11-20 07:27:32.345913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.025 [2024-11-20 07:27:32.345933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.025 [2024-11-20 07:27:32.345950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.025 [2024-11-20 07:27:32.345964] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.025 [2024-11-20 07:27:32.358371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.025 [2024-11-20 07:27:32.358737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.025 [2024-11-20 07:27:32.358765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.025 [2024-11-20 07:27:32.358779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.025 [2024-11-20 07:27:32.358976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.025 [2024-11-20 07:27:32.359185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.025 [2024-11-20 07:27:32.359205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.025 [2024-11-20 07:27:32.359217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.025 [2024-11-20 07:27:32.359230] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.025 [2024-11-20 07:27:32.371663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.025 [2024-11-20 07:27:32.371987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.025 [2024-11-20 07:27:32.372015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.025 [2024-11-20 07:27:32.372031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.026 [2024-11-20 07:27:32.372263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.026 [2024-11-20 07:27:32.372513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.026 [2024-11-20 07:27:32.372534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.026 [2024-11-20 07:27:32.372547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.026 [2024-11-20 07:27:32.372559] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.026 [2024-11-20 07:27:32.384829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.026 [2024-11-20 07:27:32.385276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.026 [2024-11-20 07:27:32.385313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.026 [2024-11-20 07:27:32.385331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.026 [2024-11-20 07:27:32.385575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.026 [2024-11-20 07:27:32.385792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.026 [2024-11-20 07:27:32.385812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.026 [2024-11-20 07:27:32.385826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.026 [2024-11-20 07:27:32.385838] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.026 [2024-11-20 07:27:32.398089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.026 [2024-11-20 07:27:32.398419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.026 [2024-11-20 07:27:32.398448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.026 [2024-11-20 07:27:32.398464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.026 [2024-11-20 07:27:32.398699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.026 [2024-11-20 07:27:32.398905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.026 [2024-11-20 07:27:32.398925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.026 [2024-11-20 07:27:32.398937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.026 [2024-11-20 07:27:32.398949] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.026 [2024-11-20 07:27:32.411300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.026 [2024-11-20 07:27:32.411660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.026 [2024-11-20 07:27:32.411688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.026 [2024-11-20 07:27:32.411704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.026 [2024-11-20 07:27:32.411941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.026 [2024-11-20 07:27:32.412145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.026 [2024-11-20 07:27:32.412165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.026 [2024-11-20 07:27:32.412178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.026 [2024-11-20 07:27:32.412190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.026 [2024-11-20 07:27:32.424431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.026 [2024-11-20 07:27:32.424810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.026 [2024-11-20 07:27:32.424838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.026 [2024-11-20 07:27:32.424853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.026 [2024-11-20 07:27:32.425069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.026 [2024-11-20 07:27:32.425278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.026 [2024-11-20 07:27:32.425297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.026 [2024-11-20 07:27:32.425336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.026 [2024-11-20 07:27:32.425361] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.026 [2024-11-20 07:27:32.437775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.026 [2024-11-20 07:27:32.438099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.026 [2024-11-20 07:27:32.438132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.026 [2024-11-20 07:27:32.438148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.026 [2024-11-20 07:27:32.438377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.026 [2024-11-20 07:27:32.438589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.026 [2024-11-20 07:27:32.438622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.026 [2024-11-20 07:27:32.438635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.026 [2024-11-20 07:27:32.438647] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.026 [2024-11-20 07:27:32.451119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.026 [2024-11-20 07:27:32.451489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.026 [2024-11-20 07:27:32.451519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.026 [2024-11-20 07:27:32.451535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.026 [2024-11-20 07:27:32.451751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.026 [2024-11-20 07:27:32.452013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.026 [2024-11-20 07:27:32.452051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.026 [2024-11-20 07:27:32.452064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.026 [2024-11-20 07:27:32.452077] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.285 [2024-11-20 07:27:32.464401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.285 [2024-11-20 07:27:32.464780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.285 [2024-11-20 07:27:32.464809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.285 [2024-11-20 07:27:32.464825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.285 [2024-11-20 07:27:32.465060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.286 [2024-11-20 07:27:32.465264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.286 [2024-11-20 07:27:32.465285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.286 [2024-11-20 07:27:32.465297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.286 [2024-11-20 07:27:32.465337] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.286 [2024-11-20 07:27:32.477513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.286 [2024-11-20 07:27:32.477885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.286 [2024-11-20 07:27:32.477912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.286 [2024-11-20 07:27:32.477927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.286 [2024-11-20 07:27:32.478145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.286 [2024-11-20 07:27:32.478367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.286 [2024-11-20 07:27:32.478388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.286 [2024-11-20 07:27:32.478401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.286 [2024-11-20 07:27:32.478413] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.286 [2024-11-20 07:27:32.490610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.286 [2024-11-20 07:27:32.490954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.286 [2024-11-20 07:27:32.490984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.286 [2024-11-20 07:27:32.491000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.286 [2024-11-20 07:27:32.491235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.286 [2024-11-20 07:27:32.491470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.286 [2024-11-20 07:27:32.491490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.286 [2024-11-20 07:27:32.491503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.286 [2024-11-20 07:27:32.491515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.286 [2024-11-20 07:27:32.503996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.286 [2024-11-20 07:27:32.504362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.286 [2024-11-20 07:27:32.504392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.286 [2024-11-20 07:27:32.504409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.286 [2024-11-20 07:27:32.504645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.286 [2024-11-20 07:27:32.504850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.286 [2024-11-20 07:27:32.504871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.286 [2024-11-20 07:27:32.504883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.286 [2024-11-20 07:27:32.504895] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.286 [2024-11-20 07:27:32.517106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.286 [2024-11-20 07:27:32.517459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.286 [2024-11-20 07:27:32.517489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.286 [2024-11-20 07:27:32.517505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.286 [2024-11-20 07:27:32.517740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.286 [2024-11-20 07:27:32.517963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.286 [2024-11-20 07:27:32.517983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.286 [2024-11-20 07:27:32.518000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.286 [2024-11-20 07:27:32.518013] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.286 [2024-11-20 07:27:32.530224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.286 [2024-11-20 07:27:32.530662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.286 [2024-11-20 07:27:32.530691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.286 [2024-11-20 07:27:32.530707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.286 [2024-11-20 07:27:32.530942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.286 [2024-11-20 07:27:32.531146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.286 [2024-11-20 07:27:32.531166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.286 [2024-11-20 07:27:32.531179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.286 [2024-11-20 07:27:32.531191] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.286 [2024-11-20 07:27:32.543309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.286 [2024-11-20 07:27:32.543725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.286 [2024-11-20 07:27:32.543755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.286 [2024-11-20 07:27:32.543770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.286 [2024-11-20 07:27:32.544007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.286 [2024-11-20 07:27:32.544212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.286 [2024-11-20 07:27:32.544232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.286 [2024-11-20 07:27:32.544244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.286 [2024-11-20 07:27:32.544256] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.286 [2024-11-20 07:27:32.556430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.286 [2024-11-20 07:27:32.556795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.286 [2024-11-20 07:27:32.556823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.286 [2024-11-20 07:27:32.556838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.286 [2024-11-20 07:27:32.557052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.286 [2024-11-20 07:27:32.557256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.286 [2024-11-20 07:27:32.557274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.286 [2024-11-20 07:27:32.557315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.286 [2024-11-20 07:27:32.557331] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.286 [2024-11-20 07:27:32.569555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.286 [2024-11-20 07:27:32.569968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.286 [2024-11-20 07:27:32.569997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.286 [2024-11-20 07:27:32.570012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.286 [2024-11-20 07:27:32.570244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.286 [2024-11-20 07:27:32.570480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.286 [2024-11-20 07:27:32.570501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.286 [2024-11-20 07:27:32.570513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.287 [2024-11-20 07:27:32.570526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.287 [2024-11-20 07:27:32.582722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.287 [2024-11-20 07:27:32.583095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.287 [2024-11-20 07:27:32.583124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.287 [2024-11-20 07:27:32.583139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.287 [2024-11-20 07:27:32.583387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.287 [2024-11-20 07:27:32.583613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.287 [2024-11-20 07:27:32.583634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.287 [2024-11-20 07:27:32.583646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.287 [2024-11-20 07:27:32.583658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.287 [2024-11-20 07:27:32.595727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.287 [2024-11-20 07:27:32.596071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.287 [2024-11-20 07:27:32.596099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.287 [2024-11-20 07:27:32.596115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.287 [2024-11-20 07:27:32.596364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.287 [2024-11-20 07:27:32.596564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.287 [2024-11-20 07:27:32.596585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.287 [2024-11-20 07:27:32.596598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.287 [2024-11-20 07:27:32.596625] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.287 [2024-11-20 07:27:32.608851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.287 [2024-11-20 07:27:32.609196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.287 [2024-11-20 07:27:32.609225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.287 [2024-11-20 07:27:32.609246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.287 [2024-11-20 07:27:32.609495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.287 [2024-11-20 07:27:32.609699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.287 [2024-11-20 07:27:32.609719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.287 [2024-11-20 07:27:32.609731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.287 [2024-11-20 07:27:32.609743] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.287 [2024-11-20 07:27:32.621964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.287 [2024-11-20 07:27:32.622315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.287 [2024-11-20 07:27:32.622345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.287 [2024-11-20 07:27:32.622360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.287 [2024-11-20 07:27:32.622595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.287 [2024-11-20 07:27:32.622798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.287 [2024-11-20 07:27:32.622819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.287 [2024-11-20 07:27:32.622832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.287 [2024-11-20 07:27:32.622845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.287 [2024-11-20 07:27:32.635360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.287 [2024-11-20 07:27:32.635734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.287 [2024-11-20 07:27:32.635764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.287 [2024-11-20 07:27:32.635781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.287 [2024-11-20 07:27:32.636014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.287 [2024-11-20 07:27:32.636253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.287 [2024-11-20 07:27:32.636289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.287 [2024-11-20 07:27:32.636313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.287 [2024-11-20 07:27:32.636344] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.287 [2024-11-20 07:27:32.648774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.287 [2024-11-20 07:27:32.649118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.287 [2024-11-20 07:27:32.649147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.287 [2024-11-20 07:27:32.649163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.287 [2024-11-20 07:27:32.649403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.287 [2024-11-20 07:27:32.649644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.287 [2024-11-20 07:27:32.649679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.287 [2024-11-20 07:27:32.649692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.287 [2024-11-20 07:27:32.649704] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.287 [2024-11-20 07:27:32.662225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.287 [2024-11-20 07:27:32.662613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.287 [2024-11-20 07:27:32.662642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.287 [2024-11-20 07:27:32.662658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.287 [2024-11-20 07:27:32.662895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.287 [2024-11-20 07:27:32.663084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.287 [2024-11-20 07:27:32.663104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.287 [2024-11-20 07:27:32.663116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.287 [2024-11-20 07:27:32.663128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.287 [2024-11-20 07:27:32.675386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.287 [2024-11-20 07:27:32.675790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.287 [2024-11-20 07:27:32.675818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.287 [2024-11-20 07:27:32.675834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.287 [2024-11-20 07:27:32.676070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.287 [2024-11-20 07:27:32.676276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.287 [2024-11-20 07:27:32.676320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.288 [2024-11-20 07:27:32.676334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.288 [2024-11-20 07:27:32.676365] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.288 [2024-11-20 07:27:32.688645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.288 [2024-11-20 07:27:32.689035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.288 [2024-11-20 07:27:32.689090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.288 [2024-11-20 07:27:32.689106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.288 [2024-11-20 07:27:32.689368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.288 [2024-11-20 07:27:32.689584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.288 [2024-11-20 07:27:32.689619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.288 [2024-11-20 07:27:32.689636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.288 [2024-11-20 07:27:32.689649] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.288 [2024-11-20 07:27:32.702017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.288 [2024-11-20 07:27:32.702436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.288 [2024-11-20 07:27:32.702485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.288 [2024-11-20 07:27:32.702501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.288 [2024-11-20 07:27:32.702736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.288 [2024-11-20 07:27:32.702925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.288 [2024-11-20 07:27:32.702944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.288 [2024-11-20 07:27:32.702956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.288 [2024-11-20 07:27:32.702968] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.288 [2024-11-20 07:27:32.715670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.547 [2024-11-20 07:27:32.716203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.547 [2024-11-20 07:27:32.716257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.547 [2024-11-20 07:27:32.716273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.547 [2024-11-20 07:27:32.716522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.547 [2024-11-20 07:27:32.716757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.547 [2024-11-20 07:27:32.716778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.547 [2024-11-20 07:27:32.716790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.547 [2024-11-20 07:27:32.716817] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.547 [2024-11-20 07:27:32.728880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.547 [2024-11-20 07:27:32.729337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.547 [2024-11-20 07:27:32.729393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.547 [2024-11-20 07:27:32.729409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.547 [2024-11-20 07:27:32.729662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.547 [2024-11-20 07:27:32.729868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.547 [2024-11-20 07:27:32.729897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.547 [2024-11-20 07:27:32.729910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.547 [2024-11-20 07:27:32.729921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.547 [2024-11-20 07:27:32.742034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.547 [2024-11-20 07:27:32.742440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.547 [2024-11-20 07:27:32.742468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.547 [2024-11-20 07:27:32.742484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.547 [2024-11-20 07:27:32.742719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.547 [2024-11-20 07:27:32.742924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.547 [2024-11-20 07:27:32.742943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.547 [2024-11-20 07:27:32.742956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.548 [2024-11-20 07:27:32.742968] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.548 [2024-11-20 07:27:32.755224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.548 [2024-11-20 07:27:32.755674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.548 [2024-11-20 07:27:32.755719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.548 [2024-11-20 07:27:32.755735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.548 [2024-11-20 07:27:32.755971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.548 [2024-11-20 07:27:32.756160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.548 [2024-11-20 07:27:32.756180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.548 [2024-11-20 07:27:32.756192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.548 [2024-11-20 07:27:32.756204] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.548 [2024-11-20 07:27:32.768454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.548 [2024-11-20 07:27:32.768939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.548 [2024-11-20 07:27:32.768992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.548 [2024-11-20 07:27:32.769008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.548 [2024-11-20 07:27:32.769276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.548 [2024-11-20 07:27:32.769500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.548 [2024-11-20 07:27:32.769522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.548 [2024-11-20 07:27:32.769534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.548 [2024-11-20 07:27:32.769547] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.548 [2024-11-20 07:27:32.781548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.548 [2024-11-20 07:27:32.781972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.548 [2024-11-20 07:27:32.782000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.548 [2024-11-20 07:27:32.782024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.548 [2024-11-20 07:27:32.782260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.548 [2024-11-20 07:27:32.782491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.548 [2024-11-20 07:27:32.782512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.548 [2024-11-20 07:27:32.782525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.548 [2024-11-20 07:27:32.782537] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.548 [2024-11-20 07:27:32.794733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.548 [2024-11-20 07:27:32.795123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.548 [2024-11-20 07:27:32.795180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.548 [2024-11-20 07:27:32.795196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.548 [2024-11-20 07:27:32.795454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.548 [2024-11-20 07:27:32.795649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.548 [2024-11-20 07:27:32.795669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.548 [2024-11-20 07:27:32.795683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.548 [2024-11-20 07:27:32.795695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.548 [2024-11-20 07:27:32.807816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.548 [2024-11-20 07:27:32.808163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.548 [2024-11-20 07:27:32.808192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.548 [2024-11-20 07:27:32.808208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.548 [2024-11-20 07:27:32.808477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.548 [2024-11-20 07:27:32.808686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.548 [2024-11-20 07:27:32.808707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.548 [2024-11-20 07:27:32.808719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.548 [2024-11-20 07:27:32.808731] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.548 [2024-11-20 07:27:32.821154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.548 [2024-11-20 07:27:32.821469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.548 [2024-11-20 07:27:32.821499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.548 [2024-11-20 07:27:32.821515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.548 [2024-11-20 07:27:32.821746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.548 [2024-11-20 07:27:32.821972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.548 [2024-11-20 07:27:32.821993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.548 [2024-11-20 07:27:32.822005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.548 [2024-11-20 07:27:32.822017] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.548 [2024-11-20 07:27:32.834515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.548 [2024-11-20 07:27:32.834965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.548 [2024-11-20 07:27:32.834994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.548 [2024-11-20 07:27:32.835010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.548 [2024-11-20 07:27:32.835245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.548 [2024-11-20 07:27:32.835493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.548 [2024-11-20 07:27:32.835518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.548 [2024-11-20 07:27:32.835532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.548 [2024-11-20 07:27:32.835546] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.548 [2024-11-20 07:27:32.847724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.548 [2024-11-20 07:27:32.848081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.548 [2024-11-20 07:27:32.848109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.548 [2024-11-20 07:27:32.848124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.548 [2024-11-20 07:27:32.848375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.548 [2024-11-20 07:27:32.848590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.548 [2024-11-20 07:27:32.848611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.548 [2024-11-20 07:27:32.848625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.548 [2024-11-20 07:27:32.848651] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.548 [2024-11-20 07:27:32.860747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.548 [2024-11-20 07:27:32.861096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.548 [2024-11-20 07:27:32.861124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.548 [2024-11-20 07:27:32.861141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.548 [2024-11-20 07:27:32.861389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.548 [2024-11-20 07:27:32.861613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.549 [2024-11-20 07:27:32.861633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.549 [2024-11-20 07:27:32.861651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.549 [2024-11-20 07:27:32.861664] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.549 [2024-11-20 07:27:32.873742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.549 [2024-11-20 07:27:32.874090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.549 [2024-11-20 07:27:32.874118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.549 [2024-11-20 07:27:32.874134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.549 [2024-11-20 07:27:32.874384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.549 [2024-11-20 07:27:32.874593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.549 [2024-11-20 07:27:32.874628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.549 [2024-11-20 07:27:32.874640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.549 [2024-11-20 07:27:32.874652] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.549 [2024-11-20 07:27:32.886754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.549 [2024-11-20 07:27:32.887117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.549 [2024-11-20 07:27:32.887145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.549 [2024-11-20 07:27:32.887161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.549 [2024-11-20 07:27:32.887415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.549 [2024-11-20 07:27:32.887622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.549 [2024-11-20 07:27:32.887656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.549 [2024-11-20 07:27:32.887669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.549 [2024-11-20 07:27:32.887681] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.549 [2024-11-20 07:27:32.900036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.549 [2024-11-20 07:27:32.900432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.549 [2024-11-20 07:27:32.900463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.549 [2024-11-20 07:27:32.900480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.549 [2024-11-20 07:27:32.900713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.549 [2024-11-20 07:27:32.900929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.549 [2024-11-20 07:27:32.900950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.549 [2024-11-20 07:27:32.900964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.549 [2024-11-20 07:27:32.900977] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.549 [2024-11-20 07:27:32.913159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.549 [2024-11-20 07:27:32.913506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.549 [2024-11-20 07:27:32.913536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.549 [2024-11-20 07:27:32.913552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.549 [2024-11-20 07:27:32.913784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.549 [2024-11-20 07:27:32.913988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.549 [2024-11-20 07:27:32.914008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.549 [2024-11-20 07:27:32.914021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.549 [2024-11-20 07:27:32.914033] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.549 [2024-11-20 07:27:32.926321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.549 [2024-11-20 07:27:32.926669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.549 [2024-11-20 07:27:32.926698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.549 [2024-11-20 07:27:32.926715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.549 [2024-11-20 07:27:32.926951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.549 [2024-11-20 07:27:32.927155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.549 [2024-11-20 07:27:32.927175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.549 [2024-11-20 07:27:32.927187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.549 [2024-11-20 07:27:32.927199] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.549 [2024-11-20 07:27:32.939569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.549 [2024-11-20 07:27:32.939931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.549 [2024-11-20 07:27:32.939959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.549 [2024-11-20 07:27:32.939975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.549 [2024-11-20 07:27:32.940210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.549 [2024-11-20 07:27:32.940439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.549 [2024-11-20 07:27:32.940462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.549 [2024-11-20 07:27:32.940476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.549 [2024-11-20 07:27:32.940489] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.549 [2024-11-20 07:27:32.952682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.549 [2024-11-20 07:27:32.953039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.549 [2024-11-20 07:27:32.953068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.549 [2024-11-20 07:27:32.953089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.549 [2024-11-20 07:27:32.953339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.549 [2024-11-20 07:27:32.953534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.549 [2024-11-20 07:27:32.953553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.549 [2024-11-20 07:27:32.953565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.549 [2024-11-20 07:27:32.953578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.549 [2024-11-20 07:27:32.965861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.549 [2024-11-20 07:27:32.966156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.550 [2024-11-20 07:27:32.966197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.550 [2024-11-20 07:27:32.966212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.550 [2024-11-20 07:27:32.966453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.550 [2024-11-20 07:27:32.966677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.550 [2024-11-20 07:27:32.966698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.550 [2024-11-20 07:27:32.966711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.550 [2024-11-20 07:27:32.966722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.809 [2024-11-20 07:27:32.979423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.809 [2024-11-20 07:27:32.979893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.809 [2024-11-20 07:27:32.979945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.809 [2024-11-20 07:27:32.979960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.809 [2024-11-20 07:27:32.980203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.809 [2024-11-20 07:27:32.980423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.809 [2024-11-20 07:27:32.980442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.809 [2024-11-20 07:27:32.980455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.809 [2024-11-20 07:27:32.980467] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.809 [2024-11-20 07:27:32.992581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.809 [2024-11-20 07:27:32.993022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.809 [2024-11-20 07:27:32.993065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.809 [2024-11-20 07:27:32.993081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.809 [2024-11-20 07:27:32.993360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.809 [2024-11-20 07:27:32.993559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.809 [2024-11-20 07:27:32.993578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.809 [2024-11-20 07:27:32.993589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.809 [2024-11-20 07:27:32.993600] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.810 [2024-11-20 07:27:33.005905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.810 [2024-11-20 07:27:33.006273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.810 [2024-11-20 07:27:33.006326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.810 [2024-11-20 07:27:33.006344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.810 [2024-11-20 07:27:33.006601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.810 [2024-11-20 07:27:33.006812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.810 [2024-11-20 07:27:33.006831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.810 [2024-11-20 07:27:33.006843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.810 [2024-11-20 07:27:33.006854] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.810 [2024-11-20 07:27:33.019084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.810 [2024-11-20 07:27:33.019493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.810 [2024-11-20 07:27:33.019535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.810 [2024-11-20 07:27:33.019551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.810 [2024-11-20 07:27:33.019789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.810 [2024-11-20 07:27:33.019983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.810 [2024-11-20 07:27:33.020002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.810 [2024-11-20 07:27:33.020014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.810 [2024-11-20 07:27:33.020025] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.810 [2024-11-20 07:27:33.032439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.810 [2024-11-20 07:27:33.032754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.810 [2024-11-20 07:27:33.032795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.810 [2024-11-20 07:27:33.032811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.810 [2024-11-20 07:27:33.033028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.810 [2024-11-20 07:27:33.033238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.810 [2024-11-20 07:27:33.033257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.810 [2024-11-20 07:27:33.033270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.810 [2024-11-20 07:27:33.033301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.810 [2024-11-20 07:27:33.045806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.810 [2024-11-20 07:27:33.046214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.810 [2024-11-20 07:27:33.046255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.810 [2024-11-20 07:27:33.046271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.810 [2024-11-20 07:27:33.046524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.810 [2024-11-20 07:27:33.046740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.810 [2024-11-20 07:27:33.046759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.810 [2024-11-20 07:27:33.046771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.810 [2024-11-20 07:27:33.046782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.810 [2024-11-20 07:27:33.059031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.810 [2024-11-20 07:27:33.059415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.810 [2024-11-20 07:27:33.059457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.810 [2024-11-20 07:27:33.059472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.810 [2024-11-20 07:27:33.059723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.810 [2024-11-20 07:27:33.059917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.810 [2024-11-20 07:27:33.059935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.810 [2024-11-20 07:27:33.059947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.810 [2024-11-20 07:27:33.059958] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.810 [2024-11-20 07:27:33.072184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.810 [2024-11-20 07:27:33.072600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.810 [2024-11-20 07:27:33.072642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.810 [2024-11-20 07:27:33.072657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.810 [2024-11-20 07:27:33.072892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.810 [2024-11-20 07:27:33.073086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.810 [2024-11-20 07:27:33.073105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.810 [2024-11-20 07:27:33.073117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.810 [2024-11-20 07:27:33.073128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.810 [2024-11-20 07:27:33.085287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.810 [2024-11-20 07:27:33.085629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.810 [2024-11-20 07:27:33.085657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.810 [2024-11-20 07:27:33.085673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.810 [2024-11-20 07:27:33.085896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.810 [2024-11-20 07:27:33.086106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.810 [2024-11-20 07:27:33.086125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.810 [2024-11-20 07:27:33.086137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.810 [2024-11-20 07:27:33.086148] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.810 [2024-11-20 07:27:33.098472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.810 [2024-11-20 07:27:33.098834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.810 [2024-11-20 07:27:33.098876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.810 [2024-11-20 07:27:33.098891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.810 [2024-11-20 07:27:33.099143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.810 [2024-11-20 07:27:33.099363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.810 [2024-11-20 07:27:33.099382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.810 [2024-11-20 07:27:33.099394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.810 [2024-11-20 07:27:33.099405] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.810 [2024-11-20 07:27:33.111511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.810 [2024-11-20 07:27:33.111916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.810 [2024-11-20 07:27:33.111957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.810 [2024-11-20 07:27:33.111973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.810 [2024-11-20 07:27:33.112208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.810 [2024-11-20 07:27:33.112438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.810 [2024-11-20 07:27:33.112459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.810 [2024-11-20 07:27:33.112472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.811 [2024-11-20 07:27:33.112484] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.811 [2024-11-20 07:27:33.124518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.811 [2024-11-20 07:27:33.124883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.811 [2024-11-20 07:27:33.124926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.811 [2024-11-20 07:27:33.124941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.811 [2024-11-20 07:27:33.125214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.811 [2024-11-20 07:27:33.125416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.811 [2024-11-20 07:27:33.125435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.811 [2024-11-20 07:27:33.125447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.811 [2024-11-20 07:27:33.125459] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.811 [2024-11-20 07:27:33.137593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.811 [2024-11-20 07:27:33.137974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.811 [2024-11-20 07:27:33.138002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.811 [2024-11-20 07:27:33.138017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.811 [2024-11-20 07:27:33.138246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.811 [2024-11-20 07:27:33.138526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.811 [2024-11-20 07:27:33.138548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.811 [2024-11-20 07:27:33.138561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.811 [2024-11-20 07:27:33.138574] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.811 [2024-11-20 07:27:33.150956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.811 [2024-11-20 07:27:33.151289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.811 [2024-11-20 07:27:33.151323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.811 [2024-11-20 07:27:33.151354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.811 [2024-11-20 07:27:33.151597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.811 [2024-11-20 07:27:33.151808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.811 [2024-11-20 07:27:33.151826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.811 [2024-11-20 07:27:33.151838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.811 [2024-11-20 07:27:33.151849] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.811 [2024-11-20 07:27:33.164134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.811 [2024-11-20 07:27:33.164513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.811 [2024-11-20 07:27:33.164555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.811 [2024-11-20 07:27:33.164570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.811 [2024-11-20 07:27:33.164820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.811 [2024-11-20 07:27:33.165029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.811 [2024-11-20 07:27:33.165052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.811 [2024-11-20 07:27:33.165064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.811 [2024-11-20 07:27:33.165076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.811 [2024-11-20 07:27:33.177225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.811 [2024-11-20 07:27:33.177584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.811 [2024-11-20 07:27:33.177612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.811 [2024-11-20 07:27:33.177629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.811 [2024-11-20 07:27:33.177881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.811 [2024-11-20 07:27:33.178075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.811 [2024-11-20 07:27:33.178093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.811 [2024-11-20 07:27:33.178105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.811 [2024-11-20 07:27:33.178116] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.811 [2024-11-20 07:27:33.190340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.811 [2024-11-20 07:27:33.190688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.811 [2024-11-20 07:27:33.190728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.811 [2024-11-20 07:27:33.190743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.811 [2024-11-20 07:27:33.190991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.811 [2024-11-20 07:27:33.191200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.811 [2024-11-20 07:27:33.191218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.811 [2024-11-20 07:27:33.191230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.811 [2024-11-20 07:27:33.191241] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.811 [2024-11-20 07:27:33.203525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.811 [2024-11-20 07:27:33.203874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.811 [2024-11-20 07:27:33.203902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.811 [2024-11-20 07:27:33.203918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.811 [2024-11-20 07:27:33.204153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.811 [2024-11-20 07:27:33.204373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.811 [2024-11-20 07:27:33.204392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.811 [2024-11-20 07:27:33.204404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.811 [2024-11-20 07:27:33.204420] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.811 [2024-11-20 07:27:33.216530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.811 [2024-11-20 07:27:33.216955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.811 [2024-11-20 07:27:33.216982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.811 [2024-11-20 07:27:33.217013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.811 [2024-11-20 07:27:33.217253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.811 [2024-11-20 07:27:33.217495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.811 [2024-11-20 07:27:33.217515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.811 [2024-11-20 07:27:33.217528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.811 [2024-11-20 07:27:33.217540] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.811 [2024-11-20 07:27:33.229749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.811 [2024-11-20 07:27:33.230114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.811 [2024-11-20 07:27:33.230157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:29.811 [2024-11-20 07:27:33.230172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:29.811 [2024-11-20 07:27:33.230453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:29.811 [2024-11-20 07:27:33.230666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.811 [2024-11-20 07:27:33.230685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.811 [2024-11-20 07:27:33.230696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.811 [2024-11-20 07:27:33.230708] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.071 [2024-11-20 07:27:33.243011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.071 [2024-11-20 07:27:33.243374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.071 [2024-11-20 07:27:33.243402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.071 [2024-11-20 07:27:33.243417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.071 [2024-11-20 07:27:33.243639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.071 [2024-11-20 07:27:33.243849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.071 [2024-11-20 07:27:33.243867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.071 [2024-11-20 07:27:33.243879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.071 [2024-11-20 07:27:33.243890] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.071 [2024-11-20 07:27:33.256168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.071 [2024-11-20 07:27:33.256607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.071 [2024-11-20 07:27:33.256650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.071 [2024-11-20 07:27:33.256666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.071 [2024-11-20 07:27:33.256905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.071 [2024-11-20 07:27:33.257113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.071 [2024-11-20 07:27:33.257131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.071 [2024-11-20 07:27:33.257143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.071 [2024-11-20 07:27:33.257154] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.071 [2024-11-20 07:27:33.269264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.071 [2024-11-20 07:27:33.269715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.071 [2024-11-20 07:27:33.269766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.071 [2024-11-20 07:27:33.269781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.071 [2024-11-20 07:27:33.270029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.071 [2024-11-20 07:27:33.270239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.071 [2024-11-20 07:27:33.270258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.071 [2024-11-20 07:27:33.270270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.071 [2024-11-20 07:27:33.270281] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.071 4434.20 IOPS, 17.32 MiB/s [2024-11-20T06:27:33.504Z] [2024-11-20 07:27:33.282381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.071 [2024-11-20 07:27:33.282852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.071 [2024-11-20 07:27:33.282907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.071 [2024-11-20 07:27:33.282923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.071 [2024-11-20 07:27:33.283186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.071 [2024-11-20 07:27:33.283389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.071 [2024-11-20 07:27:33.283409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.071 [2024-11-20 07:27:33.283420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.071 [2024-11-20 07:27:33.283432] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.071 [2024-11-20 07:27:33.295489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.071 [2024-11-20 07:27:33.295838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.071 [2024-11-20 07:27:33.295880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.071 [2024-11-20 07:27:33.295895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.071 [2024-11-20 07:27:33.296151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.071 [2024-11-20 07:27:33.296353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.071 [2024-11-20 07:27:33.296373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.071 [2024-11-20 07:27:33.296385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.071 [2024-11-20 07:27:33.296396] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.071 [2024-11-20 07:27:33.308704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.072 [2024-11-20 07:27:33.309069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.072 [2024-11-20 07:27:33.309111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.072 [2024-11-20 07:27:33.309127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.072 [2024-11-20 07:27:33.309385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.072 [2024-11-20 07:27:33.309586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.072 [2024-11-20 07:27:33.309618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.072 [2024-11-20 07:27:33.309630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.072 [2024-11-20 07:27:33.309642] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.072 [2024-11-20 07:27:33.321726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.072 [2024-11-20 07:27:33.322150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.072 [2024-11-20 07:27:33.322192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.072 [2024-11-20 07:27:33.322208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.072 [2024-11-20 07:27:33.322460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.072 [2024-11-20 07:27:33.322674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.072 [2024-11-20 07:27:33.322692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.072 [2024-11-20 07:27:33.322704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.072 [2024-11-20 07:27:33.322715] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.072 [2024-11-20 07:27:33.334904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.072 [2024-11-20 07:27:33.335269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.072 [2024-11-20 07:27:33.335296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.072 [2024-11-20 07:27:33.335320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.072 [2024-11-20 07:27:33.335557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.072 [2024-11-20 07:27:33.335766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.072 [2024-11-20 07:27:33.335790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.072 [2024-11-20 07:27:33.335802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.072 [2024-11-20 07:27:33.335813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.072 [2024-11-20 07:27:33.348037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.072 [2024-11-20 07:27:33.348367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.072 [2024-11-20 07:27:33.348394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.072 [2024-11-20 07:27:33.348409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.072 [2024-11-20 07:27:33.348633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.072 [2024-11-20 07:27:33.348844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.072 [2024-11-20 07:27:33.348863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.072 [2024-11-20 07:27:33.348875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.072 [2024-11-20 07:27:33.348886] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.072 [2024-11-20 07:27:33.361102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.072 [2024-11-20 07:27:33.361471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.072 [2024-11-20 07:27:33.361514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.072 [2024-11-20 07:27:33.361530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.072 [2024-11-20 07:27:33.361783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.072 [2024-11-20 07:27:33.361991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.072 [2024-11-20 07:27:33.362009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.072 [2024-11-20 07:27:33.362021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.072 [2024-11-20 07:27:33.362033] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.072 [2024-11-20 07:27:33.374179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.072 [2024-11-20 07:27:33.374610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.072 [2024-11-20 07:27:33.374647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.072 [2024-11-20 07:27:33.374679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.072 [2024-11-20 07:27:33.374932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.072 [2024-11-20 07:27:33.375126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.072 [2024-11-20 07:27:33.375144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.072 [2024-11-20 07:27:33.375156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.072 [2024-11-20 07:27:33.375171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.072 [2024-11-20 07:27:33.387266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.072 [2024-11-20 07:27:33.387572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.072 [2024-11-20 07:27:33.387598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.072 [2024-11-20 07:27:33.387613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.072 [2024-11-20 07:27:33.387808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.072 [2024-11-20 07:27:33.388053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.072 [2024-11-20 07:27:33.388072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.072 [2024-11-20 07:27:33.388085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.072 [2024-11-20 07:27:33.388096] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.072 [2024-11-20 07:27:33.400687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.072 [2024-11-20 07:27:33.401174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.072 [2024-11-20 07:27:33.401225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.072 [2024-11-20 07:27:33.401240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.072 [2024-11-20 07:27:33.401479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.072 [2024-11-20 07:27:33.401721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.072 [2024-11-20 07:27:33.401740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.072 [2024-11-20 07:27:33.401753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.072 [2024-11-20 07:27:33.401764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.072 [2024-11-20 07:27:33.413881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.072 [2024-11-20 07:27:33.414357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.072 [2024-11-20 07:27:33.414385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.072 [2024-11-20 07:27:33.414400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.072 [2024-11-20 07:27:33.414647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.072 [2024-11-20 07:27:33.414840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.073 [2024-11-20 07:27:33.414858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.073 [2024-11-20 07:27:33.414870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.073 [2024-11-20 07:27:33.414882] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.073 [2024-11-20 07:27:33.427135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.073 [2024-11-20 07:27:33.427552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.073 [2024-11-20 07:27:33.427593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.073 [2024-11-20 07:27:33.427609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.073 [2024-11-20 07:27:33.427845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.073 [2024-11-20 07:27:33.428039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.073 [2024-11-20 07:27:33.428057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.073 [2024-11-20 07:27:33.428068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.073 [2024-11-20 07:27:33.428080] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.073 [2024-11-20 07:27:33.440424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.073 [2024-11-20 07:27:33.440773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.073 [2024-11-20 07:27:33.440816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.073 [2024-11-20 07:27:33.440830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.073 [2024-11-20 07:27:33.441081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.073 [2024-11-20 07:27:33.441290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.073 [2024-11-20 07:27:33.441319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.073 [2024-11-20 07:27:33.441332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.073 [2024-11-20 07:27:33.441343] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.073 [2024-11-20 07:27:33.453563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.073 [2024-11-20 07:27:33.453944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.073 [2024-11-20 07:27:33.453973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.073 [2024-11-20 07:27:33.453989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.073 [2024-11-20 07:27:33.454229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.073 [2024-11-20 07:27:33.454448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.073 [2024-11-20 07:27:33.454468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.073 [2024-11-20 07:27:33.454480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.073 [2024-11-20 07:27:33.454491] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.073 [2024-11-20 07:27:33.466687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.073 [2024-11-20 07:27:33.467061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.073 [2024-11-20 07:27:33.467103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.073 [2024-11-20 07:27:33.467118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.073 [2024-11-20 07:27:33.467389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.073 [2024-11-20 07:27:33.467589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.073 [2024-11-20 07:27:33.467624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.073 [2024-11-20 07:27:33.467636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.073 [2024-11-20 07:27:33.467648] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.073 [2024-11-20 07:27:33.479923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.073 [2024-11-20 07:27:33.480280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.073 [2024-11-20 07:27:33.480314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.073 [2024-11-20 07:27:33.480332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.073 [2024-11-20 07:27:33.480547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.073 [2024-11-20 07:27:33.480781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.073 [2024-11-20 07:27:33.480801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.073 [2024-11-20 07:27:33.480813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.073 [2024-11-20 07:27:33.480825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.073 [2024-11-20 07:27:33.493341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.073 [2024-11-20 07:27:33.493698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.073 [2024-11-20 07:27:33.493741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.073 [2024-11-20 07:27:33.493756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.073 [2024-11-20 07:27:33.494026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.073 [2024-11-20 07:27:33.494244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.073 [2024-11-20 07:27:33.494264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.073 [2024-11-20 07:27:33.494276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.073 [2024-11-20 07:27:33.494287] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.334 [2024-11-20 07:27:33.506764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.334 [2024-11-20 07:27:33.507142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.334 [2024-11-20 07:27:33.507186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.334 [2024-11-20 07:27:33.507201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.334 [2024-11-20 07:27:33.507456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.334 [2024-11-20 07:27:33.507682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.334 [2024-11-20 07:27:33.507706] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.334 [2024-11-20 07:27:33.507720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.334 [2024-11-20 07:27:33.507732] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.334 [2024-11-20 07:27:33.520072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.334 [2024-11-20 07:27:33.520405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.334 [2024-11-20 07:27:33.520432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.334 [2024-11-20 07:27:33.520447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.334 [2024-11-20 07:27:33.520648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.334 [2024-11-20 07:27:33.520863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.334 [2024-11-20 07:27:33.520882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.334 [2024-11-20 07:27:33.520894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.334 [2024-11-20 07:27:33.520905] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.334 [2024-11-20 07:27:33.533439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.334 [2024-11-20 07:27:33.533800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.334 [2024-11-20 07:27:33.533842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.334 [2024-11-20 07:27:33.533857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.334 [2024-11-20 07:27:33.534112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.334 [2024-11-20 07:27:33.534328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.334 [2024-11-20 07:27:33.534348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.334 [2024-11-20 07:27:33.534360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.334 [2024-11-20 07:27:33.534371] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.334 [2024-11-20 07:27:33.546644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.334 [2024-11-20 07:27:33.547016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.334 [2024-11-20 07:27:33.547058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.334 [2024-11-20 07:27:33.547074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.334 [2024-11-20 07:27:33.547339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.334 [2024-11-20 07:27:33.547539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.334 [2024-11-20 07:27:33.547558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.334 [2024-11-20 07:27:33.547571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.334 [2024-11-20 07:27:33.547587] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.334 [2024-11-20 07:27:33.559799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.334 [2024-11-20 07:27:33.560212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.334 [2024-11-20 07:27:33.560253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.334 [2024-11-20 07:27:33.560270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.334 [2024-11-20 07:27:33.560532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.334 [2024-11-20 07:27:33.560743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.334 [2024-11-20 07:27:33.560762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.334 [2024-11-20 07:27:33.560774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.334 [2024-11-20 07:27:33.560785] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.334 [2024-11-20 07:27:33.572999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.334 [2024-11-20 07:27:33.573371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.334 [2024-11-20 07:27:33.573435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.334 [2024-11-20 07:27:33.573477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.334 [2024-11-20 07:27:33.573708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.334 [2024-11-20 07:27:33.573902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.334 [2024-11-20 07:27:33.573920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.334 [2024-11-20 07:27:33.573932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.334 [2024-11-20 07:27:33.573944] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.334 [2024-11-20 07:27:33.586140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.334 [2024-11-20 07:27:33.586513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.334 [2024-11-20 07:27:33.586542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.334 [2024-11-20 07:27:33.586558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.334 [2024-11-20 07:27:33.586799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.334 [2024-11-20 07:27:33.587009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.334 [2024-11-20 07:27:33.587028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.334 [2024-11-20 07:27:33.587040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.334 [2024-11-20 07:27:33.587051] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.334 [2024-11-20 07:27:33.599499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.334 [2024-11-20 07:27:33.599828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.334 [2024-11-20 07:27:33.599859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.334 [2024-11-20 07:27:33.599875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.334 [2024-11-20 07:27:33.600108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.334 [2024-11-20 07:27:33.600350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.334 [2024-11-20 07:27:33.600372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.334 [2024-11-20 07:27:33.600385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.334 [2024-11-20 07:27:33.600397] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.334 [2024-11-20 07:27:33.612809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.334 [2024-11-20 07:27:33.613187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.334 [2024-11-20 07:27:33.613229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.335 [2024-11-20 07:27:33.613244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.335 [2024-11-20 07:27:33.613499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.335 [2024-11-20 07:27:33.613734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.335 [2024-11-20 07:27:33.613753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.335 [2024-11-20 07:27:33.613766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.335 [2024-11-20 07:27:33.613777] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.335 [2024-11-20 07:27:33.626163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.335 [2024-11-20 07:27:33.626544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.335 [2024-11-20 07:27:33.626586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.335 [2024-11-20 07:27:33.626601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.335 [2024-11-20 07:27:33.626851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.335 [2024-11-20 07:27:33.627049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.335 [2024-11-20 07:27:33.627068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.335 [2024-11-20 07:27:33.627081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.335 [2024-11-20 07:27:33.627092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.335 [2024-11-20 07:27:33.639570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.335 [2024-11-20 07:27:33.639951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.335 [2024-11-20 07:27:33.639979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.335 [2024-11-20 07:27:33.639995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.335 [2024-11-20 07:27:33.640244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.335 [2024-11-20 07:27:33.640498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.335 [2024-11-20 07:27:33.640519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.335 [2024-11-20 07:27:33.640532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.335 [2024-11-20 07:27:33.640544] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.335 [2024-11-20 07:27:33.652727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.335 [2024-11-20 07:27:33.653089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.335 [2024-11-20 07:27:33.653132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.335 [2024-11-20 07:27:33.653147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.335 [2024-11-20 07:27:33.653421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.335 [2024-11-20 07:27:33.653644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.335 [2024-11-20 07:27:33.653663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.335 [2024-11-20 07:27:33.653675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.335 [2024-11-20 07:27:33.653686] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.335 [2024-11-20 07:27:33.665931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.335 [2024-11-20 07:27:33.666312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.335 [2024-11-20 07:27:33.666356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.335 [2024-11-20 07:27:33.666372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.335 [2024-11-20 07:27:33.666627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.335 [2024-11-20 07:27:33.666822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.335 [2024-11-20 07:27:33.666840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.335 [2024-11-20 07:27:33.666852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.335 [2024-11-20 07:27:33.666863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.335 [2024-11-20 07:27:33.679103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.335 [2024-11-20 07:27:33.679484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.335 [2024-11-20 07:27:33.679527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.335 [2024-11-20 07:27:33.679543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.335 [2024-11-20 07:27:33.679795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.335 [2024-11-20 07:27:33.680004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.335 [2024-11-20 07:27:33.680027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.335 [2024-11-20 07:27:33.680040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.335 [2024-11-20 07:27:33.680052] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.335 [2024-11-20 07:27:33.692211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.335 [2024-11-20 07:27:33.692728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.335 [2024-11-20 07:27:33.692770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.335 [2024-11-20 07:27:33.692786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.335 [2024-11-20 07:27:33.693037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.335 [2024-11-20 07:27:33.693231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.335 [2024-11-20 07:27:33.693249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.335 [2024-11-20 07:27:33.693261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.335 [2024-11-20 07:27:33.693272] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.335 [2024-11-20 07:27:33.705466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.335 [2024-11-20 07:27:33.705894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.335 [2024-11-20 07:27:33.705937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.335 [2024-11-20 07:27:33.705954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.335 [2024-11-20 07:27:33.706194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.335 [2024-11-20 07:27:33.706439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.335 [2024-11-20 07:27:33.706458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.335 [2024-11-20 07:27:33.706471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.335 [2024-11-20 07:27:33.706482] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.335 [2024-11-20 07:27:33.718721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.336 [2024-11-20 07:27:33.719086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.336 [2024-11-20 07:27:33.719114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.336 [2024-11-20 07:27:33.719129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.336 [2024-11-20 07:27:33.719376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.336 [2024-11-20 07:27:33.719585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.336 [2024-11-20 07:27:33.719604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.336 [2024-11-20 07:27:33.719616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.336 [2024-11-20 07:27:33.719627] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.336 [2024-11-20 07:27:33.731923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.336 [2024-11-20 07:27:33.732354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.336 [2024-11-20 07:27:33.732384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.336 [2024-11-20 07:27:33.732400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.336 [2024-11-20 07:27:33.732657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.336 [2024-11-20 07:27:33.732868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.336 [2024-11-20 07:27:33.732886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.336 [2024-11-20 07:27:33.732899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.336 [2024-11-20 07:27:33.732910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.336 [2024-11-20 07:27:33.745041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.336 [2024-11-20 07:27:33.745408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.336 [2024-11-20 07:27:33.745451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.336 [2024-11-20 07:27:33.745466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.336 [2024-11-20 07:27:33.745715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.336 [2024-11-20 07:27:33.745924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.336 [2024-11-20 07:27:33.745942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.336 [2024-11-20 07:27:33.745954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.336 [2024-11-20 07:27:33.745965] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.336 [2024-11-20 07:27:33.758059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.336 [2024-11-20 07:27:33.758423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.336 [2024-11-20 07:27:33.758451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.336 [2024-11-20 07:27:33.758466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.336 [2024-11-20 07:27:33.758711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.336 [2024-11-20 07:27:33.758939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.336 [2024-11-20 07:27:33.758960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.336 [2024-11-20 07:27:33.758973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.336 [2024-11-20 07:27:33.758985] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.596 [2024-11-20 07:27:33.771547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.596 [2024-11-20 07:27:33.771917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-11-20 07:27:33.771964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.596 [2024-11-20 07:27:33.771980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.596 [2024-11-20 07:27:33.772247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.596 [2024-11-20 07:27:33.772492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.596 [2024-11-20 07:27:33.772512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.596 [2024-11-20 07:27:33.772525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.596 [2024-11-20 07:27:33.772537] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.596 [2024-11-20 07:27:33.784699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.596 [2024-11-20 07:27:33.785128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-11-20 07:27:33.785156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.596 [2024-11-20 07:27:33.785187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.596 [2024-11-20 07:27:33.785439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.596 [2024-11-20 07:27:33.785654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.596 [2024-11-20 07:27:33.785673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.596 [2024-11-20 07:27:33.785686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.596 [2024-11-20 07:27:33.785697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.596 [2024-11-20 07:27:33.798251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.596 [2024-11-20 07:27:33.798594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-11-20 07:27:33.798633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.596 [2024-11-20 07:27:33.798664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.596 [2024-11-20 07:27:33.798887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.596 [2024-11-20 07:27:33.799102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.596 [2024-11-20 07:27:33.799121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.596 [2024-11-20 07:27:33.799134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.596 [2024-11-20 07:27:33.799146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.596 [2024-11-20 07:27:33.811891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.596 [2024-11-20 07:27:33.812217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-11-20 07:27:33.812245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.596 [2024-11-20 07:27:33.812260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.596 [2024-11-20 07:27:33.812506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.596 [2024-11-20 07:27:33.812734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.596 [2024-11-20 07:27:33.812754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.596 [2024-11-20 07:27:33.812767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.596 [2024-11-20 07:27:33.812779] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.596 [2024-11-20 07:27:33.825545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.596 [2024-11-20 07:27:33.825871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.596 [2024-11-20 07:27:33.825914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.596 [2024-11-20 07:27:33.825930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.596 [2024-11-20 07:27:33.826160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.596 [2024-11-20 07:27:33.826415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.597 [2024-11-20 07:27:33.826437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.597 [2024-11-20 07:27:33.826450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.597 [2024-11-20 07:27:33.826463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.597 [2024-11-20 07:27:33.839240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.597 [2024-11-20 07:27:33.839586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-11-20 07:27:33.839615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.597 [2024-11-20 07:27:33.839630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.597 [2024-11-20 07:27:33.839860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.597 [2024-11-20 07:27:33.840094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.597 [2024-11-20 07:27:33.840113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.597 [2024-11-20 07:27:33.840125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.597 [2024-11-20 07:27:33.840136] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
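Every connect() failure in this stretch reports errno = 111, which on Linux is ECONNREFUSED: nothing is listening on 10.0.0.2:4420 at this point, so bdev_nvme keeps cycling through disconnect, reconnect attempt, and "Resetting controller failed" until a listener comes back. A trivial check of the errno mapping on the build host (not part of the test scripts, just a sanity probe):

    # Print the symbolic name and message for errno 111 (Linux).
    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # -> ECONNREFUSED - Connection refused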
00:25:30.597 [2024-11-20 07:27:33.852608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.597 [2024-11-20 07:27:33.852983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-11-20 07:27:33.853010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.597 [2024-11-20 07:27:33.853025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.597 [2024-11-20 07:27:33.853259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.597 [2024-11-20 07:27:33.853499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.597 [2024-11-20 07:27:33.853521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.597 [2024-11-20 07:27:33.853540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.597 [2024-11-20 07:27:33.853553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.597 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2610255 Killed "${NVMF_APP[@]}" "$@" 00:25:30.597 07:27:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:25:30.597 07:27:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:30.597 07:27:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:30.597 07:27:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:30.597 07:27:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:30.597 [2024-11-20 07:27:33.866085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.597 [2024-11-20 07:27:33.866419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-11-20 07:27:33.866448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.597 [2024-11-20 07:27:33.866463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.597 [2024-11-20 07:27:33.866697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.597 [2024-11-20 07:27:33.866919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.597 [2024-11-20 07:27:33.866939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.597 [2024-11-20 07:27:33.866952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.597 [2024-11-20 07:27:33.866964] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
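The "Killed" message from bdevperf.sh line 35 is the point of this phase of the test: the script deliberately kills the running nvmf_tgt and then calls tgt_init, which re-runs nvmfappstart -m 0xE, while the bdevperf host keeps retrying the now-refused connection. A minimal sketch of that kill-and-restart pattern; the netns, binary path and flags mirror the trace, but the pid bookkeeping and error handling here are illustrative rather than the exact SPDK helpers:

    # Sketch of the restart performed around bdevperf.sh line 35 (assumptions noted above).
    NVMF_APP=(ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt)

    kill -9 "$nvmfpid" || true                 # old target goes away; host reconnects now get ECONNREFUSED
    "${NVMF_APP[@]}" -i 0 -e 0xFFFF -m 0xE &   # start a fresh target with the same options
    nvmfpid=$!
    # ...wait for /var/tmp/spdk.sock before configuring it (see the waitforlisten sketch below)...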
00:25:30.597 07:27:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2611289 00:25:30.597 07:27:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:30.597 07:27:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2611289 00:25:30.597 07:27:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 2611289 ']' 00:25:30.597 07:27:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:30.597 07:27:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:30.597 07:27:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:30.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:30.597 07:27:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:30.597 07:27:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:30.597 [2024-11-20 07:27:33.879465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.597 [2024-11-20 07:27:33.879917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-11-20 07:27:33.879945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.597 [2024-11-20 07:27:33.879961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.597 [2024-11-20 07:27:33.880204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.597 [2024-11-20 07:27:33.880450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.597 [2024-11-20 07:27:33.880477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.597 [2024-11-20 07:27:33.880492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.597 [2024-11-20 07:27:33.880504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
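waitforlisten 2611289 blocks until the new nvmf_tgt (pid 2611289) is alive and its RPC server answers on /var/tmp/spdk.sock, using the max_retries=100 budget visible in the trace. A rough stand-in for that wait loop, run from the SPDK repo root; probing with rpc_get_methods is one way to test liveness, not necessarily the exact check autotest_common.sh performs:

    # Poll until the target's RPC UNIX socket answers, or give up.
    pid=2611289
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt died before listening" >&2; exit 1; }
        # rpc.py fails until the socket exists and the app responds.
        if ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            break
        fi
        sleep 0.1
    done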
00:25:30.597 [2024-11-20 07:27:33.892914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.597 [2024-11-20 07:27:33.893312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-11-20 07:27:33.893355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.597 [2024-11-20 07:27:33.893372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.597 [2024-11-20 07:27:33.893586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.597 [2024-11-20 07:27:33.893842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.597 [2024-11-20 07:27:33.893863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.597 [2024-11-20 07:27:33.893877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.597 [2024-11-20 07:27:33.893894] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.597 [2024-11-20 07:27:33.906186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.597 [2024-11-20 07:27:33.906589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-11-20 07:27:33.906627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.597 [2024-11-20 07:27:33.906644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.597 [2024-11-20 07:27:33.906888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.597 [2024-11-20 07:27:33.907089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.597 [2024-11-20 07:27:33.907108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.597 [2024-11-20 07:27:33.907121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.597 [2024-11-20 07:27:33.907132] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.597 [2024-11-20 07:27:33.916834] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:25:30.597 [2024-11-20 07:27:33.916904] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:30.597 [2024-11-20 07:27:33.919665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.597 [2024-11-20 07:27:33.920163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.597 [2024-11-20 07:27:33.920205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.597 [2024-11-20 07:27:33.920221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.597 [2024-11-20 07:27:33.920486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.597 [2024-11-20 07:27:33.920713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.597 [2024-11-20 07:27:33.920733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.597 [2024-11-20 07:27:33.920746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.598 [2024-11-20 07:27:33.920757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.598 [2024-11-20 07:27:33.933221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.598 [2024-11-20 07:27:33.933586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-11-20 07:27:33.933617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.598 [2024-11-20 07:27:33.933633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.598 [2024-11-20 07:27:33.933865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.598 [2024-11-20 07:27:33.934081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.598 [2024-11-20 07:27:33.934101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.598 [2024-11-20 07:27:33.934113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.598 [2024-11-20 07:27:33.934125] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.598 [2024-11-20 07:27:33.946567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.598 [2024-11-20 07:27:33.946870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-11-20 07:27:33.946913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.598 [2024-11-20 07:27:33.946929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.598 [2024-11-20 07:27:33.947152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.598 [2024-11-20 07:27:33.947398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.598 [2024-11-20 07:27:33.947419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.598 [2024-11-20 07:27:33.947431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.598 [2024-11-20 07:27:33.947443] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.598 [2024-11-20 07:27:33.959929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.598 [2024-11-20 07:27:33.960325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-11-20 07:27:33.960368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.598 [2024-11-20 07:27:33.960384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.598 [2024-11-20 07:27:33.960613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.598 [2024-11-20 07:27:33.960829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.598 [2024-11-20 07:27:33.960848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.598 [2024-11-20 07:27:33.960861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.598 [2024-11-20 07:27:33.960877] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.598 [2024-11-20 07:27:33.973195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.598 [2024-11-20 07:27:33.973641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-11-20 07:27:33.973670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.598 [2024-11-20 07:27:33.973686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.598 [2024-11-20 07:27:33.973915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.598 [2024-11-20 07:27:33.974130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.598 [2024-11-20 07:27:33.974149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.598 [2024-11-20 07:27:33.974161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.598 [2024-11-20 07:27:33.974173] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.598 [2024-11-20 07:27:33.986438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.598 [2024-11-20 07:27:33.986737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-11-20 07:27:33.986779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.598 [2024-11-20 07:27:33.986795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.598 [2024-11-20 07:27:33.987017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.598 [2024-11-20 07:27:33.987233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.598 [2024-11-20 07:27:33.987252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.598 [2024-11-20 07:27:33.987265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.598 [2024-11-20 07:27:33.987277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.598 [2024-11-20 07:27:33.989570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:30.598 [2024-11-20 07:27:33.999787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.598 [2024-11-20 07:27:34.000271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-11-20 07:27:34.000356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.598 [2024-11-20 07:27:34.000399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.598 [2024-11-20 07:27:34.000645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.598 [2024-11-20 07:27:34.000851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.598 [2024-11-20 07:27:34.000871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.598 [2024-11-20 07:27:34.000886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.598 [2024-11-20 07:27:34.000900] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.598 [2024-11-20 07:27:34.013095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.598 [2024-11-20 07:27:34.013654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.598 [2024-11-20 07:27:34.013687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.598 [2024-11-20 07:27:34.013706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.598 [2024-11-20 07:27:34.013955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.598 [2024-11-20 07:27:34.014172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.598 [2024-11-20 07:27:34.014192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.598 [2024-11-20 07:27:34.014205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.598 [2024-11-20 07:27:34.014218] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.858 [2024-11-20 07:27:34.026953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.858 [2024-11-20 07:27:34.027296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.858 [2024-11-20 07:27:34.027347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.858 [2024-11-20 07:27:34.027364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.858 [2024-11-20 07:27:34.027595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.858 [2024-11-20 07:27:34.027812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.858 [2024-11-20 07:27:34.027832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.858 [2024-11-20 07:27:34.027845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.858 [2024-11-20 07:27:34.027857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.858 [2024-11-20 07:27:34.040407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.858 [2024-11-20 07:27:34.040771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.858 [2024-11-20 07:27:34.040800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.858 [2024-11-20 07:27:34.040817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.858 [2024-11-20 07:27:34.041060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.858 [2024-11-20 07:27:34.041275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.859 [2024-11-20 07:27:34.041295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.859 [2024-11-20 07:27:34.041336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.859 [2024-11-20 07:27:34.041350] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.859 [2024-11-20 07:27:34.048047] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:30.859 [2024-11-20 07:27:34.048090] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:30.859 [2024-11-20 07:27:34.048103] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:30.859 [2024-11-20 07:27:34.048120] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:30.859 [2024-11-20 07:27:34.048145] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
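Because the target was started with -e 0xFFFF, every tracepoint group is enabled and the trace ring can be inspected while the run is live, exactly as the app_setup_trace notices above suggest. Two ways to grab it, assuming the spdk_trace binary sits under build/bin in this workspace:

    # Snapshot the live trace ring for app name "nvmf", shared-memory instance 0.
    ./build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt

    # Or keep the raw ring buffer for offline decoding later.
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0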
00:25:30.859 [2024-11-20 07:27:34.049529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:30.859 [2024-11-20 07:27:34.049558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:30.859 [2024-11-20 07:27:34.049562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:30.859 [2024-11-20 07:27:34.053813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.859 [2024-11-20 07:27:34.054224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.859 [2024-11-20 07:27:34.054256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.859 [2024-11-20 07:27:34.054274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.859 [2024-11-20 07:27:34.054505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.859 [2024-11-20 07:27:34.054740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.859 [2024-11-20 07:27:34.054761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.859 [2024-11-20 07:27:34.054778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.859 [2024-11-20 07:27:34.054791] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.859 [2024-11-20 07:27:34.067279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.859 [2024-11-20 07:27:34.067791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.859 [2024-11-20 07:27:34.067828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.859 [2024-11-20 07:27:34.067848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.859 [2024-11-20 07:27:34.068088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.859 [2024-11-20 07:27:34.068315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.859 [2024-11-20 07:27:34.068337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.859 [2024-11-20 07:27:34.068353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.859 [2024-11-20 07:27:34.068368] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
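The reactor start-up notices above show threads landing on cores 1, 2 and 3, which follows directly from the -m 0xE core mask: 0xE is binary 1110, so bits 1-3 are set and core 0 is excluded. A quick way to expand any such mask (a trivial helper, not from the test scripts):

    # Expand a hex core mask into the CPU numbers it selects.
    mask=0xE
    for ((cpu = 0; cpu < 64; cpu++)); do
        (( (mask >> cpu) & 1 )) && echo "core $cpu"
    done
    # 0xE -> core 1, core 2, core 3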
00:25:30.859 [2024-11-20 07:27:34.080875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.859 [2024-11-20 07:27:34.081408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.859 [2024-11-20 07:27:34.081448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.859 [2024-11-20 07:27:34.081468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.859 [2024-11-20 07:27:34.081709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.859 [2024-11-20 07:27:34.081928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.859 [2024-11-20 07:27:34.081950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.859 [2024-11-20 07:27:34.081966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.859 [2024-11-20 07:27:34.081997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.859 [2024-11-20 07:27:34.094497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.859 [2024-11-20 07:27:34.095008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.859 [2024-11-20 07:27:34.095048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.859 [2024-11-20 07:27:34.095067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.859 [2024-11-20 07:27:34.095320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.859 [2024-11-20 07:27:34.095539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.859 [2024-11-20 07:27:34.095560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.859 [2024-11-20 07:27:34.095577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.859 [2024-11-20 07:27:34.095593] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.859 [2024-11-20 07:27:34.108002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.859 [2024-11-20 07:27:34.108478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.859 [2024-11-20 07:27:34.108515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.859 [2024-11-20 07:27:34.108534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.859 [2024-11-20 07:27:34.108771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.859 [2024-11-20 07:27:34.108988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.859 [2024-11-20 07:27:34.109009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.859 [2024-11-20 07:27:34.109025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.859 [2024-11-20 07:27:34.109040] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.859 [2024-11-20 07:27:34.121535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.859 [2024-11-20 07:27:34.122075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.859 [2024-11-20 07:27:34.122115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.859 [2024-11-20 07:27:34.122136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.859 [2024-11-20 07:27:34.122386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.859 [2024-11-20 07:27:34.122605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.859 [2024-11-20 07:27:34.122627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.859 [2024-11-20 07:27:34.122644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.859 [2024-11-20 07:27:34.122659] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.859 [2024-11-20 07:27:34.135126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.859 [2024-11-20 07:27:34.135616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.859 [2024-11-20 07:27:34.135649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.859 [2024-11-20 07:27:34.135667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.859 [2024-11-20 07:27:34.135903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.859 [2024-11-20 07:27:34.136119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.859 [2024-11-20 07:27:34.136140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.859 [2024-11-20 07:27:34.136155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.859 [2024-11-20 07:27:34.136169] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.859 [2024-11-20 07:27:34.148703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.859 [2024-11-20 07:27:34.149103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.859 [2024-11-20 07:27:34.149134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.860 [2024-11-20 07:27:34.149150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.860 [2024-11-20 07:27:34.149375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.860 [2024-11-20 07:27:34.149595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.860 [2024-11-20 07:27:34.149616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.860 [2024-11-20 07:27:34.149630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.860 [2024-11-20 07:27:34.149642] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.860 [2024-11-20 07:27:34.162208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.860 [2024-11-20 07:27:34.162542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.860 [2024-11-20 07:27:34.162570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.860 [2024-11-20 07:27:34.162586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.860 [2024-11-20 07:27:34.162801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.860 [2024-11-20 07:27:34.163021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.860 [2024-11-20 07:27:34.163042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.860 [2024-11-20 07:27:34.163055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.860 [2024-11-20 07:27:34.163067] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.860 07:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:30.860 07:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:25:30.860 07:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:30.860 07:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:30.860 07:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:30.860 [2024-11-20 07:27:34.175787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.860 [2024-11-20 07:27:34.176157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.860 [2024-11-20 07:27:34.176185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.860 [2024-11-20 07:27:34.176201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.860 [2024-11-20 07:27:34.176427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.860 [2024-11-20 07:27:34.176661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.860 [2024-11-20 07:27:34.176683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.860 [2024-11-20 07:27:34.176697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.860 [2024-11-20 07:27:34.176710] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.860 [2024-11-20 07:27:34.189433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.860 [2024-11-20 07:27:34.189798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.860 [2024-11-20 07:27:34.189827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.860 [2024-11-20 07:27:34.189843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.860 [2024-11-20 07:27:34.190058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.860 [2024-11-20 07:27:34.190288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.860 [2024-11-20 07:27:34.190334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.860 [2024-11-20 07:27:34.190349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.860 [2024-11-20 07:27:34.190363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.860 07:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:30.860 07:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:30.860 07:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.860 07:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:30.860 [2024-11-20 07:27:34.200025] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:30.860 [2024-11-20 07:27:34.203095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.860 [2024-11-20 07:27:34.203464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.860 [2024-11-20 07:27:34.203492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.860 [2024-11-20 07:27:34.203508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.860 [2024-11-20 07:27:34.203739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.860 [2024-11-20 07:27:34.203952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.860 [2024-11-20 07:27:34.203972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.860 [2024-11-20 07:27:34.203991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.860 [2024-11-20 07:27:34.204004] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.860 07:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.860 07:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:30.860 07:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.860 07:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:30.860 [2024-11-20 07:27:34.216729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.860 [2024-11-20 07:27:34.217163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.860 [2024-11-20 07:27:34.217195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.860 [2024-11-20 07:27:34.217214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.860 [2024-11-20 07:27:34.217453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.860 [2024-11-20 07:27:34.217688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.860 [2024-11-20 07:27:34.217709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.860 [2024-11-20 07:27:34.217723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.860 [2024-11-20 07:27:34.217736] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.860 [2024-11-20 07:27:34.230187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.860 [2024-11-20 07:27:34.230536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.860 [2024-11-20 07:27:34.230565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.860 [2024-11-20 07:27:34.230581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.860 [2024-11-20 07:27:34.230810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.860 [2024-11-20 07:27:34.231016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.860 [2024-11-20 07:27:34.231036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.860 [2024-11-20 07:27:34.231049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.860 [2024-11-20 07:27:34.231061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
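After the restart, tgt_init reconfigures the fresh target over RPC: the trace above shows nvmf_create_transport and bdev_malloc_create, and the lines that follow add the subsystem, its namespace and the TCP listener. rpc_cmd in these scripts forwards its arguments to the SPDK RPC client, so the same bring-up can be reproduced by hand roughly as below (rpc.py location and default socket path are assumptions):

    # Recreate the configuration tgt_init applies after the restart.
    RPC=(./scripts/rpc.py -s /var/tmp/spdk.sock)

    "${RPC[@]}" nvmf_create_transport -t tcp -o -u 8192      # TCP transport; -u sets an 8 KiB IO unit size
    "${RPC[@]}" bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM bdev with 512 B blocks
    "${RPC[@]}" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "${RPC[@]}" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "${RPC[@]}" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420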
00:25:30.860 [2024-11-20 07:27:34.243659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.860 [2024-11-20 07:27:34.244090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.860 [2024-11-20 07:27:34.244124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.861 [2024-11-20 07:27:34.244144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.861 [2024-11-20 07:27:34.244379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.861 [2024-11-20 07:27:34.244620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.861 [2024-11-20 07:27:34.244642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.861 [2024-11-20 07:27:34.244669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.861 [2024-11-20 07:27:34.244685] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.861 Malloc0 00:25:30.861 07:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.861 07:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:30.861 07:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.861 07:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:30.861 [2024-11-20 07:27:34.257246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.861 07:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.861 07:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:30.861 [2024-11-20 07:27:34.257573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.861 [2024-11-20 07:27:34.257602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94fa40 with addr=10.0.0.2, port=4420 00:25:30.861 [2024-11-20 07:27:34.257618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94fa40 is same with the state(6) to be set 00:25:30.861 07:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.861 07:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:30.861 [2024-11-20 07:27:34.257833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94fa40 (9): Bad file descriptor 00:25:30.861 [2024-11-20 07:27:34.258054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.861 [2024-11-20 07:27:34.258075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.861 [2024-11-20 07:27:34.258088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:25:30.861 [2024-11-20 07:27:34.258101] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.861 07:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.861 07:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:30.861 07:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.861 07:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:30.861 [2024-11-20 07:27:34.269337] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:30.861 [2024-11-20 07:27:34.270898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.861 07:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.861 07:27:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2610539 00:25:31.143 3695.17 IOPS, 14.43 MiB/s [2024-11-20T06:27:34.576Z] [2024-11-20 07:27:34.300183] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:25:33.037 4347.57 IOPS, 16.98 MiB/s [2024-11-20T06:27:37.402Z] 4869.88 IOPS, 19.02 MiB/s [2024-11-20T06:27:38.335Z] 5275.44 IOPS, 20.61 MiB/s [2024-11-20T06:27:39.706Z] 5612.60 IOPS, 21.92 MiB/s [2024-11-20T06:27:40.640Z] 5873.27 IOPS, 22.94 MiB/s [2024-11-20T06:27:41.572Z] 6092.83 IOPS, 23.80 MiB/s [2024-11-20T06:27:42.506Z] 6277.31 IOPS, 24.52 MiB/s [2024-11-20T06:27:43.440Z] 6436.71 IOPS, 25.14 MiB/s [2024-11-20T06:27:43.440Z] 6573.33 IOPS, 25.68 MiB/s 00:25:40.007 Latency(us) 00:25:40.007 [2024-11-20T06:27:43.440Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:40.007 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:40.007 Verification LBA range: start 0x0 length 0x4000 00:25:40.007 Nvme1n1 : 15.01 6573.54 25.68 10107.84 0.00 7650.17 579.51 22039.51 00:25:40.007 [2024-11-20T06:27:43.440Z] =================================================================================================================== 00:25:40.007 [2024-11-20T06:27:43.440Z] Total : 6573.54 25.68 10107.84 0.00 7650.17 579.51 22039.51 00:25:40.265 07:27:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:25:40.265 07:27:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:40.265 07:27:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.265 07:27:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:40.265 07:27:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.265 07:27:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:25:40.265 07:27:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:25:40.265 07:27:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:40.265 07:27:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:25:40.265 07:27:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:40.265 07:27:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 
00:25:40.265 07:27:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:40.265 07:27:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:40.265 rmmod nvme_tcp 00:25:40.265 rmmod nvme_fabrics 00:25:40.265 rmmod nvme_keyring 00:25:40.265 07:27:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:40.265 07:27:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:25:40.265 07:27:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:25:40.265 07:27:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2611289 ']' 00:25:40.265 07:27:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2611289 00:25:40.265 07:27:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 2611289 ']' 00:25:40.265 07:27:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # kill -0 2611289 00:25:40.265 07:27:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # uname 00:25:40.265 07:27:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:40.265 07:27:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2611289 00:25:40.265 07:27:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:40.265 07:27:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:40.265 07:27:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2611289' 00:25:40.265 killing process with pid 2611289 00:25:40.265 07:27:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@971 -- # kill 2611289 00:25:40.265 07:27:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@976 -- # wait 2611289 00:25:40.523 07:27:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:40.523 07:27:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:40.523 07:27:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:40.523 07:27:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:25:40.523 07:27:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:25:40.523 07:27:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:40.523 07:27:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:25:40.523 07:27:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:40.523 07:27:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:40.523 07:27:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.523 07:27:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:40.523 07:27:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:43.057 07:27:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:43.057 00:25:43.057 real 0m22.550s 00:25:43.057 user 1m0.168s 00:25:43.057 sys 0m4.201s 00:25:43.057 07:27:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:25:43.057 07:27:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:43.057 ************************************ 00:25:43.057 END TEST nvmf_bdevperf 00:25:43.057 ************************************ 00:25:43.057 07:27:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:43.057 07:27:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:43.057 07:27:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:43.057 07:27:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.057 ************************************ 00:25:43.057 START TEST nvmf_target_disconnect 00:25:43.057 ************************************ 00:25:43.057 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:43.057 * Looking for test storage... 00:25:43.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:43.057 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:43.057 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:25:43.057 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:43.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.058 --rc genhtml_branch_coverage=1 00:25:43.058 --rc genhtml_function_coverage=1 00:25:43.058 --rc genhtml_legend=1 00:25:43.058 --rc geninfo_all_blocks=1 00:25:43.058 --rc geninfo_unexecuted_blocks=1 00:25:43.058 00:25:43.058 ' 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:43.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.058 --rc genhtml_branch_coverage=1 00:25:43.058 --rc genhtml_function_coverage=1 00:25:43.058 --rc genhtml_legend=1 00:25:43.058 --rc geninfo_all_blocks=1 00:25:43.058 --rc geninfo_unexecuted_blocks=1 00:25:43.058 00:25:43.058 ' 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:43.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.058 --rc genhtml_branch_coverage=1 00:25:43.058 --rc genhtml_function_coverage=1 00:25:43.058 --rc genhtml_legend=1 00:25:43.058 --rc geninfo_all_blocks=1 00:25:43.058 --rc geninfo_unexecuted_blocks=1 00:25:43.058 00:25:43.058 ' 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:43.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.058 --rc genhtml_branch_coverage=1 00:25:43.058 --rc genhtml_function_coverage=1 00:25:43.058 --rc genhtml_legend=1 00:25:43.058 --rc geninfo_all_blocks=1 00:25:43.058 --rc geninfo_unexecuted_blocks=1 00:25:43.058 00:25:43.058 ' 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:43.058 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.059 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.059 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.059 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:25:43.059 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.059 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:25:43.059 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:43.059 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:43.059 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:43.059 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:43.059 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:43.059 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:43.059 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:43.059 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:43.059 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:43.059 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:43.059 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:43.059 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:25:43.059 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:25:43.059 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:25:43.059 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:43.059 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:43.059 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:43.059 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:43.059 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:43.059 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:43.059 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:43.059 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:43.059 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:43.059 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:43.059 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:25:43.059 07:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:44.960 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:44.960 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:44.960 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:44.961 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:44.961 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:44.961 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:44.961 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:44.961 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:44.961 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:44.961 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:44.961 Found net devices under 0000:09:00.0: cvl_0_0 00:25:44.961 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:44.961 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:44.961 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:44.961 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:44.961 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:44.961 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:44.961 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:44.961 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:44.961 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:44.961 Found net devices under 0000:09:00.1: cvl_0_1 00:25:44.961 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:44.961 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:44.961 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:25:44.961 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:44.961 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:44.961 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:44.961 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
00:25:44.961 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:44.961 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:44.961 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:44.961 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:44.961 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:44.961 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:44.961 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:44.961 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:44.961 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:44.961 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:44.961 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:44.961 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:44.961 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:44.961 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:44.961 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:44.961 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:44.961 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:44.961 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:45.220 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:45.220 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:45.220 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:45.220 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:45.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:45.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:25:45.220 00:25:45.220 --- 10.0.0.2 ping statistics --- 00:25:45.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.220 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:25:45.220 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:45.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:45.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:25:45.220 00:25:45.220 --- 10.0.0.1 ping statistics --- 00:25:45.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.220 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:25:45.220 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:45.220 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:25:45.220 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:45.220 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:45.220 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:45.220 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:45.220 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:45.220 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:45.220 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:45.220 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:25:45.220 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:45.220 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:45.220 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:45.220 ************************************ 00:25:45.220 START TEST nvmf_target_disconnect_tc1 00:25:45.220 ************************************ 00:25:45.220 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc1 00:25:45.220 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:45.220 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:25:45.220 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:45.220 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:45.220 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:45.220 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:45.220 07:27:48 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:45.220 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:45.220 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:45.220 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:45.220 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:25:45.220 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:45.220 [2024-11-20 07:27:48.580633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.220 [2024-11-20 07:27:48.580699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x789f40 with addr=10.0.0.2, port=4420 00:25:45.220 [2024-11-20 07:27:48.580736] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:45.220 [2024-11-20 07:27:48.580756] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:45.220 [2024-11-20 07:27:48.580770] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:25:45.220 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:25:45.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:25:45.220 Initializing NVMe Controllers 00:25:45.220 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:25:45.220 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:45.221 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:45.221 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:45.221 00:25:45.221 real 0m0.103s 00:25:45.221 user 0m0.052s 00:25:45.221 sys 0m0.050s 00:25:45.221 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:45.221 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:45.221 ************************************ 00:25:45.221 END TEST nvmf_target_disconnect_tc1 00:25:45.221 ************************************ 00:25:45.221 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:25:45.221 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:45.221 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 
00:25:45.221 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:45.221 ************************************ 00:25:45.221 START TEST nvmf_target_disconnect_tc2 00:25:45.221 ************************************ 00:25:45.221 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc2 00:25:45.221 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:25:45.221 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:45.221 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:45.221 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:45.221 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:45.221 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2614485 00:25:45.221 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:45.221 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2614485 00:25:45.221 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 2614485 ']' 00:25:45.221 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:45.221 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:45.221 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:45.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:45.221 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:45.221 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:45.479 [2024-11-20 07:27:48.696716] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:25:45.479 [2024-11-20 07:27:48.696789] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:45.479 [2024-11-20 07:27:48.766271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:45.479 [2024-11-20 07:27:48.822980] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:45.479 [2024-11-20 07:27:48.823033] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:45.479 [2024-11-20 07:27:48.823061] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:45.479 [2024-11-20 07:27:48.823071] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:45.479 [2024-11-20 07:27:48.823081] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:45.479 [2024-11-20 07:27:48.824600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:25:45.479 [2024-11-20 07:27:48.824641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:25:45.479 [2024-11-20 07:27:48.824698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:25:45.479 [2024-11-20 07:27:48.824702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:45.738 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:45.738 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:25:45.738 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:45.738 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:45.738 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:45.738 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:45.738 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:45.738 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.738 07:27:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:45.738 Malloc0 00:25:45.738 07:27:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.738 07:27:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:45.738 07:27:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.738 07:27:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:45.738 [2024-11-20 07:27:49.005204] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:45.738 07:27:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.738 07:27:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:45.738 07:27:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.738 07:27:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:45.738 07:27:49 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.738 07:27:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:45.738 07:27:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.738 07:27:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:45.738 07:27:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.738 07:27:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:45.738 07:27:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.738 07:27:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:45.738 [2024-11-20 07:27:49.033487] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:45.738 07:27:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.738 07:27:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:45.738 07:27:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.738 07:27:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:45.738 07:27:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.738 07:27:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2614509 00:25:45.738 07:27:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:45.738 07:27:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:25:47.640 07:27:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2614485 00:25:47.640 07:27:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:25:47.640 Read completed with error (sct=0, sc=8) 00:25:47.640 starting I/O failed 00:25:47.640 Read completed with error (sct=0, sc=8) 00:25:47.640 starting I/O failed 00:25:47.640 Read completed with error (sct=0, sc=8) 00:25:47.640 starting I/O failed 00:25:47.640 Read completed with error (sct=0, sc=8) 00:25:47.640 starting I/O failed 00:25:47.640 Read completed with error (sct=0, sc=8) 00:25:47.640 starting I/O failed 00:25:47.640 Read completed with error (sct=0, sc=8) 00:25:47.640 starting I/O failed 00:25:47.640 Read completed with error 
00:25:47.640 Read completed with error (sct=0, sc=8)
00:25:47.640 starting I/O failed
[... the "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" entry followed by "starting I/O failed" repeats for every outstanding command on a queue pair (the example was started with -q 32), once per queue pair, each run ending in one of the CQ transport errors below ...]
00:25:47.641 [2024-11-20 07:27:51.058665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:47.641 [2024-11-20 07:27:51.058993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:47.641 [2024-11-20 07:27:51.059331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:47.642 [2024-11-20 07:27:51.059659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:25:47.642 [2024-11-20 07:27:51.059870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.642 [2024-11-20 07:27:51.059921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420
00:25:47.642 qpair failed and we were unable to recover it.
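errno = 111 here is ECONNREFUSED: after the kill -9 above nothing is listening on 10.0.0.2 port 4420 any more, so every socket the host opens for a new qpair is refused and the qpair cannot be recovered, which is why the same triplet keeps recurring below while the reconnect example retries. The condition can be checked independently of the example; a minimal probe, assuming the host's bash has /dev/tcp support:

# Try to open a TCP connection to the dead listener; bash reports
# "Connection refused" (the same errno 111 seen in the entries above).
timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' \
    || echo '10.0.0.2:4420 refused the connection, as expected while the target is down'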
[... the same three entries — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111, nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=... with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — repeat continuously while the target stays down, cycling through tqpair values 0x7fce10000b90, 0x7fce14000b90, 0x7fce1c000b90 and 0x1b2bfa0 ...]
00:25:47.930 [2024-11-20 07:27:51.083076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.930 [2024-11-20 07:27:51.083103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.930 qpair failed and we were unable to recover it. 00:25:47.930 [2024-11-20 07:27:51.083238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.930 [2024-11-20 07:27:51.083265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.930 qpair failed and we were unable to recover it. 00:25:47.930 [2024-11-20 07:27:51.083381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.930 [2024-11-20 07:27:51.083408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.930 qpair failed and we were unable to recover it. 00:25:47.930 [2024-11-20 07:27:51.083490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.930 [2024-11-20 07:27:51.083517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.930 qpair failed and we were unable to recover it. 00:25:47.930 [2024-11-20 07:27:51.083706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.930 [2024-11-20 07:27:51.083733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.930 qpair failed and we were unable to recover it. 00:25:47.930 [2024-11-20 07:27:51.083858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.930 [2024-11-20 07:27:51.083886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.930 qpair failed and we were unable to recover it. 00:25:47.930 [2024-11-20 07:27:51.083989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.930 [2024-11-20 07:27:51.084016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.930 qpair failed and we were unable to recover it. 00:25:47.930 [2024-11-20 07:27:51.084139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.930 [2024-11-20 07:27:51.084168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.930 qpair failed and we were unable to recover it. 00:25:47.930 [2024-11-20 07:27:51.084253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.930 [2024-11-20 07:27:51.084280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.930 qpair failed and we were unable to recover it. 00:25:47.930 [2024-11-20 07:27:51.084386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.930 [2024-11-20 07:27:51.084414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.930 qpair failed and we were unable to recover it. 
00:25:47.930 [2024-11-20 07:27:51.084533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.930 [2024-11-20 07:27:51.084560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.930 qpair failed and we were unable to recover it. 00:25:47.930 [2024-11-20 07:27:51.084651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.930 [2024-11-20 07:27:51.084679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.930 qpair failed and we were unable to recover it. 00:25:47.930 [2024-11-20 07:27:51.084769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.930 [2024-11-20 07:27:51.084796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.930 qpair failed and we were unable to recover it. 00:25:47.930 [2024-11-20 07:27:51.084906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.930 [2024-11-20 07:27:51.084932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.930 qpair failed and we were unable to recover it. 00:25:47.930 [2024-11-20 07:27:51.085044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.930 [2024-11-20 07:27:51.085070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.930 qpair failed and we were unable to recover it. 00:25:47.930 [2024-11-20 07:27:51.085186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.930 [2024-11-20 07:27:51.085212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.930 qpair failed and we were unable to recover it. 00:25:47.930 [2024-11-20 07:27:51.085328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.930 [2024-11-20 07:27:51.085356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.930 qpair failed and we were unable to recover it. 00:25:47.930 [2024-11-20 07:27:51.085443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.930 [2024-11-20 07:27:51.085469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.930 qpair failed and we were unable to recover it. 00:25:47.930 [2024-11-20 07:27:51.085557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.930 [2024-11-20 07:27:51.085584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.930 qpair failed and we were unable to recover it. 00:25:47.930 [2024-11-20 07:27:51.085707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.930 [2024-11-20 07:27:51.085735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.930 qpair failed and we were unable to recover it. 
00:25:47.930 [2024-11-20 07:27:51.085856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.930 [2024-11-20 07:27:51.085882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.930 qpair failed and we were unable to recover it. 00:25:47.930 [2024-11-20 07:27:51.085984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.930 [2024-11-20 07:27:51.086011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.930 qpair failed and we were unable to recover it. 00:25:47.930 [2024-11-20 07:27:51.086127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.930 [2024-11-20 07:27:51.086155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.930 qpair failed and we were unable to recover it. 00:25:47.930 [2024-11-20 07:27:51.086265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.930 [2024-11-20 07:27:51.086291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.930 qpair failed and we were unable to recover it. 00:25:47.930 [2024-11-20 07:27:51.086406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.930 [2024-11-20 07:27:51.086433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.930 qpair failed and we were unable to recover it. 00:25:47.930 [2024-11-20 07:27:51.086518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.930 [2024-11-20 07:27:51.086546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.930 qpair failed and we were unable to recover it. 00:25:47.930 [2024-11-20 07:27:51.086652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.930 [2024-11-20 07:27:51.086678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.930 qpair failed and we were unable to recover it. 00:25:47.930 [2024-11-20 07:27:51.086769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.930 [2024-11-20 07:27:51.086796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.930 qpair failed and we were unable to recover it. 00:25:47.930 [2024-11-20 07:27:51.086911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.930 [2024-11-20 07:27:51.086937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.930 qpair failed and we were unable to recover it. 00:25:47.930 [2024-11-20 07:27:51.087024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.930 [2024-11-20 07:27:51.087051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.930 qpair failed and we were unable to recover it. 
00:25:47.930 [2024-11-20 07:27:51.087170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.930 [2024-11-20 07:27:51.087209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.930 qpair failed and we were unable to recover it. 00:25:47.930 [2024-11-20 07:27:51.087356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.930 [2024-11-20 07:27:51.087384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.930 qpair failed and we were unable to recover it. 00:25:47.930 [2024-11-20 07:27:51.087475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.930 [2024-11-20 07:27:51.087508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.930 qpair failed and we were unable to recover it. 00:25:47.930 [2024-11-20 07:27:51.087595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.930 [2024-11-20 07:27:51.087621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.930 qpair failed and we were unable to recover it. 00:25:47.930 [2024-11-20 07:27:51.087740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.930 [2024-11-20 07:27:51.087766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.930 qpair failed and we were unable to recover it. 00:25:47.930 [2024-11-20 07:27:51.087854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.930 [2024-11-20 07:27:51.087880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.930 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.087996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.088023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.088161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.088188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.088318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.088368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.088464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.088492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 
00:25:47.931 [2024-11-20 07:27:51.088606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.088633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.088810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.088865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.089013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.089069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.089174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.089213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.089316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.089344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.089494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.089521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.089640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.089666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.089805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.089831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.089943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.089969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.090078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.090104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 
00:25:47.931 [2024-11-20 07:27:51.090197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.090224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.090336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.090370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.090458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.090486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.090607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.090647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.090747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.090775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.090917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.090944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.091036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.091064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.091151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.091178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.091296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.091328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.091531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.091559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 
00:25:47.931 [2024-11-20 07:27:51.091707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.091733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.091845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.091872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.092050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.092113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.092210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.092250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.092359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.092390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.092482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.092510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.092595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.092622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.092738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.092766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.092852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.092879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.092969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.092997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 
00:25:47.931 [2024-11-20 07:27:51.093138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.093164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.093289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.093339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.093487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.093515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.093644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.093670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.093785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.093811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.093893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.093919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.094043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.094082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.094176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.094204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.094317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.094346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.094432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.094459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 
00:25:47.931 [2024-11-20 07:27:51.094577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.094604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.094702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.094730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.094842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.094868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.094959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.094987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.095102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.095132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.095250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.095277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.095411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.095438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.095555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.095583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.095703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.095730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.095873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.095900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 
00:25:47.931 [2024-11-20 07:27:51.095984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.096012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.096094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.096122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.096242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.931 [2024-11-20 07:27:51.096282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.931 qpair failed and we were unable to recover it. 00:25:47.931 [2024-11-20 07:27:51.096428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.096457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.096553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.096579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.096668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.096695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.096811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.096838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.096955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.096981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.097120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.097147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.097259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.097291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 
00:25:47.932 [2024-11-20 07:27:51.097411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.097451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.097542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.097570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.097689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.097716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.097806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.097832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.097944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.097971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.098055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.098082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.098172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.098200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.098337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.098364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.098477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.098502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.098584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.098611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 
00:25:47.932 [2024-11-20 07:27:51.098693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.098720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.098829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.098855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.098997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.099023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.099158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.099198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.099334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.099373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.099477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.099505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.099602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.099628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.099717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.099743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.099855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.099881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.099989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.100016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 
00:25:47.932 [2024-11-20 07:27:51.100122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.100162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.100286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.100342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.100445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.100472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.100586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.100613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.100747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.100773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.100858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.100885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.100980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.101012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.101139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.101179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.101286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.101333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.101482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.101509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 
00:25:47.932 [2024-11-20 07:27:51.101596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.101622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.101742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.101768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.101873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.101899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.102012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.102038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.102123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.102149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.102287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.102321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.102411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.102439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.102526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.102553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.102642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.102668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.102778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.102805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 
00:25:47.932 [2024-11-20 07:27:51.102925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.102952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.103079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.103118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.103232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.103258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.103406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.103445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.103552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.103580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.103698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.103754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.103908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.103976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.104089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.104115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.104221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.104247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.104339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.104366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 
00:25:47.932 [2024-11-20 07:27:51.104505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.104534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.104661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.104689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.104806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.104833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.104946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.104973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.105095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.105125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.932 qpair failed and we were unable to recover it. 00:25:47.932 [2024-11-20 07:27:51.105221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.932 [2024-11-20 07:27:51.105249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.105336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.105364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.105455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.105482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.105587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.105626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.105831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.105885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 
00:25:47.933 [2024-11-20 07:27:51.106013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.106066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.106174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.106201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.106321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.106348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.106487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.106513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.106619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.106645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.106758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.106785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.106874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.106900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.107020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.107049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.107161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.107188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.107279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.107313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 
00:25:47.933 [2024-11-20 07:27:51.107404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.107431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.107552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.107579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.107661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.107688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.107780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.107807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.107884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.107910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.108023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.108049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.108135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.108163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.108299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.108331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.108441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.108467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.108595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.108621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 
00:25:47.933 [2024-11-20 07:27:51.108741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.108768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.108905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.108930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.109020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.109046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.109157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.109183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.109275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.109306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.109396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.109423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.109556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.109582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.109660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.109688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.109801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.109827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.109941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.109967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 
00:25:47.933 [2024-11-20 07:27:51.110063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.110089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.110218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.110258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.110383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.110413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.110539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.110572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.110662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.110689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.110812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.110869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.111017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.111044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.111156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.111183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.111275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.111310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.111411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.111438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 
00:25:47.933 [2024-11-20 07:27:51.111581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.111607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.111717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.111744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.111858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.111885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.111970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.111997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.112103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.112142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.112274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.112313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.112406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.112433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.112526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.112552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.112639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.112665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.112781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.112807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 
00:25:47.933 [2024-11-20 07:27:51.112897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.112924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.113022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.113060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.113181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.113209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.113338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.113378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.113507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.113535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.113618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.113644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.113783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.113809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.113896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.113921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.114050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.933 [2024-11-20 07:27:51.114076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.933 qpair failed and we were unable to recover it. 00:25:47.933 [2024-11-20 07:27:51.114169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.114196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 
00:25:47.934 [2024-11-20 07:27:51.114316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.114349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.114476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.114504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.114590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.114617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.114726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.114752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.114897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.114924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.115037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.115063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.115146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.115173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.115260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.115286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.115386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.115412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.115498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.115523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 
00:25:47.934 [2024-11-20 07:27:51.115613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.115639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.115718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.115745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.115837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.115865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.115950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.115977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.116076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.116106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.116190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.116217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.116330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.116357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.116474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.116501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.116582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.116609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.116698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.116725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 
00:25:47.934 [2024-11-20 07:27:51.116832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.116858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.116979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.117006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.117135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.117161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.117255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.117284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.117417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.117444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.117538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.117566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.117682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.117710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.117863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.117890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.118011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.118038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.118126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.118154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 
00:25:47.934 [2024-11-20 07:27:51.118283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.118330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.118431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.118460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.118588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.118627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.118724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.118751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.118838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.118866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.118954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.118980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.119062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.119088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.119170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.119196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.119323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.119350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.119460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.119487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 
00:25:47.934 [2024-11-20 07:27:51.119598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.119624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.119726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.119753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.119888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.119915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.119991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.120018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.120138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.120164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.120257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.120284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.120400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.120448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.120605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.120633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.120715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.120742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.120832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.120859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 
00:25:47.934 [2024-11-20 07:27:51.120968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.120994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.121184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.121210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.121321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.121354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.121466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.121493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.121646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.121686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.121782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.121810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.121896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.934 [2024-11-20 07:27:51.121923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.934 qpair failed and we were unable to recover it. 00:25:47.934 [2024-11-20 07:27:51.121999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.122025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.122133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.122160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.122268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.122315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 
00:25:47.935 [2024-11-20 07:27:51.122445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.122473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.122587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.122613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.122808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.122834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.122979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.123033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.123169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.123196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.123308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.123335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.123419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.123446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.123588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.123619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.123704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.123730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.123822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.123849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 
00:25:47.935 [2024-11-20 07:27:51.123995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.124022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.124115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.124141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.124251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.124278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.124396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.124423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.124552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.124591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.124709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.124736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.124875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.124901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.125057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.125115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.125200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.125226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.125329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.125368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 
00:25:47.935 [2024-11-20 07:27:51.125492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.125520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.125657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.125696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.125815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.125843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.125954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.125981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.126065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.126092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.126184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.126211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.126310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.126338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.126430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.126457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.126570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.126596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.126704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.126730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 
00:25:47.935 [2024-11-20 07:27:51.126815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.126842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.126957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.126982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.127068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.127095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.127200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.127227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.127352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.127384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.127479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.127505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.127587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.127614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.127691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.127717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.127852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.127879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.127961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.127988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 
00:25:47.935 [2024-11-20 07:27:51.128144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.128184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.128318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.128348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.128495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.128522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.128609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.128636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.128727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.128755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.128869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.128895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.128980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.129007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.129096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.129123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.129224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.129263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.129366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.129395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 
00:25:47.935 [2024-11-20 07:27:51.129489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.129516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.129604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.129630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.129819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.129875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.129999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.130061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.130169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.130197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.130339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.130368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.130494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.130521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.130647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.130674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.130858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.130914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.131028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.131092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 
00:25:47.935 [2024-11-20 07:27:51.131230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.131256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.935 qpair failed and we were unable to recover it. 00:25:47.935 [2024-11-20 07:27:51.131350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-11-20 07:27:51.131377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.131483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.131509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.131625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.131651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.131734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.131761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.131883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.131910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.131996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.132023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.132118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.132157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.132296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.132342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.132488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.132518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 
00:25:47.936 [2024-11-20 07:27:51.132636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.132664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.132784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.132811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.132926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.132953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.133103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.133130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.133244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.133275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.133417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.133456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.133572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.133600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.133696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.133722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.133809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.133835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.133921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.133947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 
00:25:47.936 [2024-11-20 07:27:51.134092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.134118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.134250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.134289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.134395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.134424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.134636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.134675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.134872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.134901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.134991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.135019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.135125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.135151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.135242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.135268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.135389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.135428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.135545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.135573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 
00:25:47.936 [2024-11-20 07:27:51.135663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.135691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.135771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.135798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.135893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.135919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.136038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.136065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.136148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.136176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.136267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.136296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.136405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.136445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.136544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.136574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.136658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.136684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.136799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.136825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 
00:25:47.936 [2024-11-20 07:27:51.137011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.137077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.137175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.137202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.137294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.137328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.137447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.137474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.137591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.137617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.137698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.137725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.137837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.137865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.137946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.137973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.138103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.138143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.138266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.138294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 
00:25:47.936 [2024-11-20 07:27:51.138496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.138524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.138641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.138669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.138806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.138832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.138924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.138950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.139066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.139098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.139211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.139237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.139391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.139431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.139527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.139556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.139675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.139702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.139811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.139838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 
00:25:47.936 [2024-11-20 07:27:51.139979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.140005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.140113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.140139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.140224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.140252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.140362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-11-20 07:27:51.140401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.936 qpair failed and we were unable to recover it. 00:25:47.936 [2024-11-20 07:27:51.140495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.140524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.140643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.140669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.140810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.140836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.140947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.140973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.141088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.141114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.141202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.141229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 
00:25:47.937 [2024-11-20 07:27:51.141331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.141371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.141473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.141501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.141589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.141617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.141720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.141747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.141827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.141854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.141999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.142027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.142120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.142147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.142233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.142260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.142371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.142398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.142514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.142540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 
00:25:47.937 [2024-11-20 07:27:51.142618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.142644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.142788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.142820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.142931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.142957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.143039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.143066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.143164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.143192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.143308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.143336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.143450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.143477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.143593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.143620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.143730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.143757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.143849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.143876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 
00:25:47.937 [2024-11-20 07:27:51.143961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.143987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.144088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.144127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.144273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.144301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.144392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.144418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.144532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.144558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.144655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.144681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.144852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.144907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.145046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.145099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.145217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.145243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.145330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.145357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 
00:25:47.937 [2024-11-20 07:27:51.145466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.145492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.145607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.145634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.145740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.145766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.145861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.145889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.145977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.146005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.146131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.146171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.146267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.146296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.146446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.146473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.146564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.146591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.146676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.146703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 
00:25:47.937 [2024-11-20 07:27:51.146820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.146846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.146937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.146965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.147092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.147132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.147259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.147298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.147428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.147456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.147571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.147598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.147740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.147766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.147922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.147980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.148093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.148121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.148253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.148292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 
00:25:47.937 [2024-11-20 07:27:51.148443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.148471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.148586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.148618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.148741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.148793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.937 qpair failed and we were unable to recover it. 00:25:47.937 [2024-11-20 07:27:51.148902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.937 [2024-11-20 07:27:51.148927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.149043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.149069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.149189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.149215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.149324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.149350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.149465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.149494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.149608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.149635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.149748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.149774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 
00:25:47.938 [2024-11-20 07:27:51.149884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.149911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.150048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.150074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.150162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.150202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.150322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.150351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.150435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.150461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.150587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.150614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.150696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.150723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.150806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.150831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.150944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.150970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.151045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.151071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 
00:25:47.938 [2024-11-20 07:27:51.151145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.151171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.151279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.151313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.151444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.151471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.151588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.151614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.151722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.151748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.151866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.151892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.152005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.152032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.152144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.152171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.152323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.152368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.152468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.152496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 
00:25:47.938 [2024-11-20 07:27:51.152582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.152609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.152697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.152724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.152861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.152887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.152974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.153000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.153125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.153152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.153243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.153269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.153354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.153380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.153477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.153503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.153594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.153620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.153703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.153730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 
00:25:47.938 [2024-11-20 07:27:51.153843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.153870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.154007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.154034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.154174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.154214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.154334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.154363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.154479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.154506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.154612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.154638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.154722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.154749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.154866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.154892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.155030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.155056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.155169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.155196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 
00:25:47.938 [2024-11-20 07:27:51.155330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.155370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.155489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.155517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.155628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.155655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.155736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.155763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.155851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.155877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.155989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.156019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.156134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.156160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.156244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.156271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.156407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.156434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.156545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.156571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 
00:25:47.938 [2024-11-20 07:27:51.156684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.156709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.156862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.156888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.157010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.157037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.157152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.157178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.157265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.157291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.157410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.157437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.938 qpair failed and we were unable to recover it. 00:25:47.938 [2024-11-20 07:27:51.157547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.938 [2024-11-20 07:27:51.157574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.157669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.157696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.157787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.157814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.157931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.157958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 
00:25:47.939 [2024-11-20 07:27:51.158091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.158131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.158245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.158272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.158391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.158418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.158499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.158525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.158641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.158667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.158783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.158810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.158926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.158952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.159066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.159092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.159180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.159207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.159298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.159330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 
00:25:47.939 [2024-11-20 07:27:51.159444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.159471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.159567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.159607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.159729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.159762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.159957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.159985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.160072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.160099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.160206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.160232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.160388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.160428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.160520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.160549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.160663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.160690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.160786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.160814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 
00:25:47.939 [2024-11-20 07:27:51.160908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.160936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.161047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.161074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.161159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.161185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.161294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.161326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.161443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.161470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.161603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.161629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.161772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.161798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.161919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.161946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.162051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.162077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.162167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.162196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 
00:25:47.939 [2024-11-20 07:27:51.162294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.162327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.162441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.162467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.162550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.162577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.162685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.162711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.162802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.162828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.162943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.162970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.163059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.163085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.163195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.163221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.163317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.163344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.163439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.163465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 
00:25:47.939 [2024-11-20 07:27:51.163574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.163600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.163680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.163706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.163793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.163820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.163927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.163953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.164091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.164117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.164209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.164237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.164375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.164415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.164533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.164561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.164658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.164684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.164802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.164829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 
00:25:47.939 [2024-11-20 07:27:51.164914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.164942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.165032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.165058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.165177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.165209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.165333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.165360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.939 [2024-11-20 07:27:51.165494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.939 [2024-11-20 07:27:51.165520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.939 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.165637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.165663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.165773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.165799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.165884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.165911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.166027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.166053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.166168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.166197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 
00:25:47.940 [2024-11-20 07:27:51.166283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.166316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.166419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.166448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.166568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.166595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.166680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.166708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.166801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.166828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.166966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.166993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.167116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.167143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.167247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.167287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.167392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.167420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.167503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.167530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 
00:25:47.940 [2024-11-20 07:27:51.167671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.167697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.167840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.167866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.167958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.167984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.168104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.168132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.168284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.168332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.168430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.168458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.168573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.168601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.168735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.168785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.168940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.168993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.169117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.169145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 
00:25:47.940 [2024-11-20 07:27:51.169240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.169268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.169383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.169421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.169512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.169540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.169628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.169655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.169763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.169789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.169898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.169924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.170010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.170036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.170145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.170171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.170282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.170315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.170434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.170460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 
00:25:47.940 [2024-11-20 07:27:51.170566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.170592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.170678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.170704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.170821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.170853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.170964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.171003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.171129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.171157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.171267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.171293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.171389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.171416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.171545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.171585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.171717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.171757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.171855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.171882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 
00:25:47.940 [2024-11-20 07:27:51.171998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.172024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.172112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.172138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.172217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.172243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.172334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.172363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.172455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.172483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.172604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.172634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.172761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.172789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.172932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.172958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.173072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.173099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.173191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.173218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 
00:25:47.940 [2024-11-20 07:27:51.173366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.173397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.173486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.173514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.173628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.173656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.173768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.173794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.173888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.173915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.174069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.174096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.174186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.174213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.940 qpair failed and we were unable to recover it. 00:25:47.940 [2024-11-20 07:27:51.174361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.940 [2024-11-20 07:27:51.174388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.174465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.174492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.174584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.174615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 
00:25:47.941 [2024-11-20 07:27:51.174729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.174756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.174869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.174897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.175016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.175043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.175185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.175211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.175336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.175365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.175453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.175480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.175567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.175594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.175712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.175739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.175854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.175880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.175990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.176016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 
00:25:47.941 [2024-11-20 07:27:51.176154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.176181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.176271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.176297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.176387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.176414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.176526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.176554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.176692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.176741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.176853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.176902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.177035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.177085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.177204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.177230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.177373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.177400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.177484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.177511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 
00:25:47.941 [2024-11-20 07:27:51.177702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.177729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.177839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.177865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.177951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.177977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.178067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.178096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.178212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.178238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.178348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.178375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.178463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.178491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.178572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.178598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.178711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.178738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.178822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.178848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 
00:25:47.941 [2024-11-20 07:27:51.178923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.178950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.179034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.179060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.179152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.179180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.179284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.179319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.179414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.179440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.179535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.179562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.179654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.179681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.179824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.179875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.180018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.180044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.180128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.180159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 
00:25:47.941 [2024-11-20 07:27:51.180243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.180270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.180366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.180393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.180511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.180538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.180660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.180687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.180768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.180795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.180894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.180933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.181056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.181085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.181205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.181232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.181372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.181400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.181478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.181505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 
00:25:47.941 [2024-11-20 07:27:51.181592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.181619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.181726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.181752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.181869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.181896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.182049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.182075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.182167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.182195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.182291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.182325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.182435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.182462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.182575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.182602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.182682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.182708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.182817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.182843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 
00:25:47.941 [2024-11-20 07:27:51.182934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.941 [2024-11-20 07:27:51.182962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.941 qpair failed and we were unable to recover it. 00:25:47.941 [2024-11-20 07:27:51.183071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.183109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.183232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.183261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.183411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.183440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.183529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.183556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.183650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.183677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.183784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.183815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.183897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.183924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.184013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.184041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.184169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.184209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 
00:25:47.942 [2024-11-20 07:27:51.184350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.184378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.184493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.184519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.184606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.184632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.184744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.184770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.184879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.184905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.184990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.185016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.185154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.185180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.185278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.185330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.185478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.185506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.185598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.185626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 
00:25:47.942 [2024-11-20 07:27:51.185828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.185855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.185967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.185993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.186133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.186160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.186237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.186264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.186386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.186413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.186606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.186632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.186798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.186824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.186936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.186962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.187092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.187118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.187250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.187290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 
00:25:47.942 [2024-11-20 07:27:51.187435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.187474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.187579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.187608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.187749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.187776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.187864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.187891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.187976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.188002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.188084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.188110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.188243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.188283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.188411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.188439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.188557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.188584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.188673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.188701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 
00:25:47.942 [2024-11-20 07:27:51.188838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.188865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.188953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.188978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.189063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.189089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.189210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.189237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.189344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.189385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.189543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.189570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.189706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.189738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.189829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.189857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.189975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.190002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.190110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.190150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 
00:25:47.942 [2024-11-20 07:27:51.190246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.190273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.190406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.190446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.190566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.190594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.190677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.190703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.190788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.190814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.190920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.190946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.191043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.191073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.191158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.191185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.191313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.191342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.942 [2024-11-20 07:27:51.191431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.191459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 
00:25:47.942 [2024-11-20 07:27:51.191555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.942 [2024-11-20 07:27:51.191581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.942 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.191703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.191730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.191824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.191850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.191965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.191993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.192106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.192147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.192273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.192306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.192426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.192452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.192566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.192593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.192709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.192735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.192843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.192869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 
00:25:47.943 [2024-11-20 07:27:51.192986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.193014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.193111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.193142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.193243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.193283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.193412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.193446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.193533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.193560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.193697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.193723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.193834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.193860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.193950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.193977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.194090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.194120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.194241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.194268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 
00:25:47.943 [2024-11-20 07:27:51.194359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.194386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.194471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.194498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.194714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.194777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.194951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.195004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.195108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.195134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.195253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.195282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.195379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.195406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.195500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.195527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.195660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.195719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.195884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.195946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 
00:25:47.943 [2024-11-20 07:27:51.196036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.196064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.196181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.196208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.196296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.196327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.196436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.196462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.196542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.196569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.196705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.196731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.196871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.196923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.197034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.197084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.197217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.197256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.197361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.197390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 
00:25:47.943 [2024-11-20 07:27:51.197508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.197542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.197658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.197706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.197857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.197908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.198074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.198100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.198182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.198208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.198324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.198352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.198467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.198494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.198611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.198638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.198721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.198749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.198823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.198849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 
00:25:47.943 [2024-11-20 07:27:51.198958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.198984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.199113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.199153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.199313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.199354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.199453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.199481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.199605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.199632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.199750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.199778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.199927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.199973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.200084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.200112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.200226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.200251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 00:25:47.943 [2024-11-20 07:27:51.200444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.200470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.943 qpair failed and we were unable to recover it. 
00:25:47.943 [2024-11-20 07:27:51.200611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.943 [2024-11-20 07:27:51.200664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.200842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.200893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.201084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.201110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.201219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.201245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.201363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.201389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.201482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.201509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.201604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.201630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.201784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.201810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.201962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.202019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.202130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.202157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 
00:25:47.944 [2024-11-20 07:27:51.202263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.202310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.202439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.202467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.202559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.202586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.202733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.202759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.202876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.202926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.203042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.203068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.203175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.203202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.203314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.203342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.203456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.203482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.203567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.203595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 
00:25:47.944 [2024-11-20 07:27:51.203711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.203737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.203859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.203885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.203968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.203994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.204107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.204133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.204246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.204272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.204393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.204422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.204535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.204562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.204684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.204710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.204822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.204848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.204972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.204998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 
00:25:47.944 [2024-11-20 07:27:51.205089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.205115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.205222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.205249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.205359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.205387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.205472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.205499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.205634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.205689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.205769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.205795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.205941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.205990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.206108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.206136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.206256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.206284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.206382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.206421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 
00:25:47.944 [2024-11-20 07:27:51.206541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.206569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.206678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.206705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.206821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.206847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.206944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.206972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.207093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.207121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.207333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.207373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.207471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.207498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.207690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.207721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.207913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.207940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.208053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.208078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 
00:25:47.944 [2024-11-20 07:27:51.208189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.208215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.208372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.208400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.208594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.208620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.208764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.208790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.208952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.208978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.209170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.209195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.209349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.209389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.209513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.209541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.209651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.209679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.209765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.209792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 
00:25:47.944 [2024-11-20 07:27:51.209876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.209903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.210024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.210051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.210169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.210195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.944 qpair failed and we were unable to recover it. 00:25:47.944 [2024-11-20 07:27:51.210284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.944 [2024-11-20 07:27:51.210320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.210402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.210429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.210514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.210541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.210692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.210718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.210835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.210861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.210945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.210971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.211075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.211101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 
00:25:47.945 [2024-11-20 07:27:51.211201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.211239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.211402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.211441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.211544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.211571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.211690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.211718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.211833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.211890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.212062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.212113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.212255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.212283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.212383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.212410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.212501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.212527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.212643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.212670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 
00:25:47.945 [2024-11-20 07:27:51.212759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.212786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.212954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.213009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.213121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.213147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.213228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.213254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.213377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.213405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.213488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.213515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.213636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.213663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.213782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.213808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.213899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.213925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.214012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.214039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 
00:25:47.945 [2024-11-20 07:27:51.214130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.214157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.214257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.214296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.214401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.214429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.214538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.214565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.214672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.214698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.214782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.214808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.214921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.214948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.215091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.215119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.215203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.215230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.215319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.215346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 
00:25:47.945 [2024-11-20 07:27:51.215492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.215518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.215643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.215682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.215787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.215815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.215906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.215932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.216046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.216074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.216156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.216183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.216322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.216350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.216438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.216466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.216558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.216584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.216675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.216703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 
00:25:47.945 [2024-11-20 07:27:51.216870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.216897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.217037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.217064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.217177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.217204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.217283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.217315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.217407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.217438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.217523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.217551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.217635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.217662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.217750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.217778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.217916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.217943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.218036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.218063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 
00:25:47.945 [2024-11-20 07:27:51.218150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.945 [2024-11-20 07:27:51.218177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.945 qpair failed and we were unable to recover it. 00:25:47.945 [2024-11-20 07:27:51.218290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.218330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.218447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.218473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.218606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.218632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.218749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.218776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.218866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.218893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.219006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.219032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.219146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.219174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.219289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.219325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.219404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.219431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 
00:25:47.946 [2024-11-20 07:27:51.219544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.219572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.219653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.219681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.219795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.219822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.219929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.219968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.220117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.220145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.220227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.220254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.220366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.220393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.220504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.220531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.220611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.220645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.220767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.220795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 
00:25:47.946 [2024-11-20 07:27:51.220937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.220964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.221057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.221084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.221172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.221198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.221337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.221364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.221474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.221500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.221641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.221668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.221751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.221779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.221871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.221898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.222029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.222057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.222143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.222170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 
00:25:47.946 [2024-11-20 07:27:51.222286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.222319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.222430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.222457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.222544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.222571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.222660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.222686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.222807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.222839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.222927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.222953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.223043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.223069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.223210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.223237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.223347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.223374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.223481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.223507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 
00:25:47.946 [2024-11-20 07:27:51.223622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.223649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.223737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.223764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.223878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.223905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.224019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.224046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.224176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.224215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.224362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.224391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.224514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.224540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.224732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.224758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.224881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.224907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.225050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.225103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 
00:25:47.946 [2024-11-20 07:27:51.225194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.225221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.225336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.225363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.225453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.225481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.225570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.225596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.225678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.225706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.225818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.225846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.225985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.226012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.226100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.226127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.226242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.226270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.226408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.226447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 
00:25:47.946 [2024-11-20 07:27:51.226544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.226571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.226714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.226771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.226922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.226976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.227150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.227201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.946 qpair failed and we were unable to recover it. 00:25:47.946 [2024-11-20 07:27:51.227284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.946 [2024-11-20 07:27:51.227326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.227440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.227467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.227577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.227603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.227722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.227748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.227867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.227894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.228016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.228043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 
00:25:47.947 [2024-11-20 07:27:51.228135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.228162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.228289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.228336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.228424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.228451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.228566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.228592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.228677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.228703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.228823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.228850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.228988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.229015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.229132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.229160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.229253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.229279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.229383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.229423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 
00:25:47.947 [2024-11-20 07:27:51.229537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.229565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.229682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.229710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.229797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.229823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.229942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.229993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.230164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.230216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.230333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.230362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.230468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.230495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.230577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.230604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.230717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.230744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.230853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.230879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 
00:25:47.947 [2024-11-20 07:27:51.231014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.231039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.231167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.231193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.231341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.231368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.231457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.231482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.231592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.231618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.231725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.231751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.231843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.231869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.231995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.232020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.232138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.232165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.232307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.232334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 
00:25:47.947 [2024-11-20 07:27:51.232471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.232496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.232609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.232635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.232722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.232747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.232874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.232900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.233037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.233063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.233168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.233194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.233282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.233316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.233458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.233485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.233561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.233587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.233671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.233697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 
00:25:47.947 [2024-11-20 07:27:51.233779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.233805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.233898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.233924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.234058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.234085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.234196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.234221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.234363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.234389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.234500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.234530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.234612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.234639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.234744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.234771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.234863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.234888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.234974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.235001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 
00:25:47.947 [2024-11-20 07:27:51.235122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.235148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.235235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.235261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.235357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.235383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.235464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.235490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.235571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.235598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.235702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.235728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.235840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.947 [2024-11-20 07:27:51.235867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.947 qpair failed and we were unable to recover it. 00:25:47.947 [2024-11-20 07:27:51.235950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.235977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.236115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.236141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.236263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.236289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 
00:25:47.948 [2024-11-20 07:27:51.236401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.236441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.236585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.236613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.236725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.236753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.236841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.236868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.236955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.236982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.237093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.237121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.237205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.237233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.237331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.237371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.237494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.237524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.237638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.237665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 
00:25:47.948 [2024-11-20 07:27:51.237778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.237804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.237912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.237938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.238029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.238056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.238145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.238171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.238263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.238290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.238422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.238448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.238535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.238561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.238647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.238673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.238780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.238806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.238920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.238946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 
00:25:47.948 [2024-11-20 07:27:51.239026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.239052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.239138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.239164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.239283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.239317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.239427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.239453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.239538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.239564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.239705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.239731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.239845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.239871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.239985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.240014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.240135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.240164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.240275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.240311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 
00:25:47.948 [2024-11-20 07:27:51.240389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.240416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.240505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.240532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.240623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.240650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.240728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.240756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.240870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.240919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.240997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.241023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.241127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.241153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.241297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.241331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.241469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.241495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.241585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.241614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 
00:25:47.948 [2024-11-20 07:27:51.241697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.241725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.241810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.241837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.241994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.242021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.242128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.242155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.242241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.242268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.242386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.948 [2024-11-20 07:27:51.242414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.948 qpair failed and we were unable to recover it. 00:25:47.948 [2024-11-20 07:27:51.242542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.242568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.242658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.242685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.242792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.242819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.242958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.242985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 
00:25:47.949 [2024-11-20 07:27:51.243097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.243124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.243218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.243246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.243340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.243373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.243491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.243517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.243603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.243630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.243710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.243736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.243816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.243842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.243958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.243985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.244085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.244124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.244249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.244277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 
00:25:47.949 [2024-11-20 07:27:51.244419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.244447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.244555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.244581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.244689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.244715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.244867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.244893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.245002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.245028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.245112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.245139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.245277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.245333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.245464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.245493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.245631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.245658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.245774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.245800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 
00:25:47.949 [2024-11-20 07:27:51.245891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.245920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.246017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.246044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.246159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.246186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.246278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.246312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.246410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.246437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.246580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.246606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.246723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.246750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.246833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.246861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.246979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.247007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.247092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.247122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 
00:25:47.949 [2024-11-20 07:27:51.247271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.247297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.247392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.247418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.247531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.247558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.247673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.247699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.247791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.247818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.247941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.247980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.248066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.248093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.248180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.248206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.248400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.248427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.248509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.248535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 
00:25:47.949 [2024-11-20 07:27:51.248629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.248655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.248766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.248793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.248901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.248928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.249028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.249058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.249174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.249201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.249287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.249323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.249465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.249492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.249577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.249603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.249744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.249770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.249865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.249892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 
00:25:47.949 [2024-11-20 07:27:51.249982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.250009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.250117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.250144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.250285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.250320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.250407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.250434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.250520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.250546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.250634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.250667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.250765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.250793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.250910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.250936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.251016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.949 [2024-11-20 07:27:51.251042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.949 qpair failed and we were unable to recover it. 00:25:47.949 [2024-11-20 07:27:51.251153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.251179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 
00:25:47.950 [2024-11-20 07:27:51.251268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.251295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.251440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.251466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.251551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.251578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.251675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.251701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.251813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.251840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.251949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.251976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.252090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.252117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.252228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.252254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.252365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.252403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.252549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.252581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 
00:25:47.950 [2024-11-20 07:27:51.252712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.252738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.252822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.252848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.252956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.252982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.253100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.253126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.253219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.253246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.253342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.253369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.253459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.253485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.253604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.253644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.253734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.253762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.253851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.253877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 
00:25:47.950 [2024-11-20 07:27:51.253957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.253984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.254090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.254116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.254235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.254262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.254425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.254452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.254565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.254592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.254681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.254707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.254783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.254811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.254925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.254951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.255063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.255089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.255174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.255200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 
00:25:47.950 [2024-11-20 07:27:51.255315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.255342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.255433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.255460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.255554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.255581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.255699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.255724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.255813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.255839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.255925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.255952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.256067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.256106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.256228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.256257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.256365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.256392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.256474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.256501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 
00:25:47.950 [2024-11-20 07:27:51.256584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.256610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.256687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.256713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.256848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.256876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.257018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.257069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.257186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.257212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.257294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.257327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.257480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.257518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.257626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.257665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.257787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.257816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.257937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.257970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 
00:25:47.950 [2024-11-20 07:27:51.258085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.258112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.258227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.258253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.258342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.258370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.258462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.258488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.258573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.258599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.258682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.258709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.258826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.258852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.258937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.258963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.259042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.259069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.259178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.259205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 
00:25:47.950 [2024-11-20 07:27:51.259291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.259322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.950 qpair failed and we were unable to recover it. 00:25:47.950 [2024-11-20 07:27:51.259438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.950 [2024-11-20 07:27:51.259464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.259577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.259603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.259720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.259747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.259834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.259863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.259972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.259999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.260105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.260136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.260253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.260281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.260409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.260447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.260596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.260625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 
00:25:47.951 [2024-11-20 07:27:51.260770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.260825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.260960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.261010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.261174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.261230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.261344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.261372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.261484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.261510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.261622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.261647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.261733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.261765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.261848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.261874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.262009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.262035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.262122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.262148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 
00:25:47.951 [2024-11-20 07:27:51.262283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.262332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.262455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.262484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.262589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.262628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.262752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.262780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.262894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.262920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.263016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.263042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.263135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.263162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.263254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.263279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.263405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.263431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.263544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.263571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 
00:25:47.951 [2024-11-20 07:27:51.263692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.263743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.263877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.263915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.264047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.264073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.264210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.264249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.264366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.264406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.264525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.264553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.264668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.264696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.264813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.264840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.264930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.264958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.265042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.265072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 
00:25:47.951 [2024-11-20 07:27:51.265188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.265218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.265346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.265386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.265482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.265511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.265641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.265667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.265785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.265812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.265940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.265967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.266115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.266142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.266230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.266256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.266354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.266381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.266490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.266516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 
00:25:47.951 [2024-11-20 07:27:51.266619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.266645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.266728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.266754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.266828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.266854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.266933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.266959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.267054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.267081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.267194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.267220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.267360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.267387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.267510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.267536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.267621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.267648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 00:25:47.951 [2024-11-20 07:27:51.267766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.267793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.951 qpair failed and we were unable to recover it. 
00:25:47.951 [2024-11-20 07:27:51.267934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.951 [2024-11-20 07:27:51.267961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.268079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.268105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.268202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.268242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.268354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.268394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.268489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.268517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.268639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.268665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.268798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.268855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.268937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.268963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.269079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.269105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.269199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.269224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 
00:25:47.952 [2024-11-20 07:27:51.269327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.269354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.269445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.269471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.269582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.269608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.269685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.269711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.269798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.269826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.269919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.269946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.270091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.270117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.270230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.270256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.270389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.270429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.270552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.270581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 
00:25:47.952 [2024-11-20 07:27:51.270672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.270699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.270875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.270943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.271216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.271242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.271356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.271389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.271478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.271505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.271591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.271617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.271726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.271753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.271833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.271860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.271994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.272020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.272149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.272188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 
00:25:47.952 [2024-11-20 07:27:51.272278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.272315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.272408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.272435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.272544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.272570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.272716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.272757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.272872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.272926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.273147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.273174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.273268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.273294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.273424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.273451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.273561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.273587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.273718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.273758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 
00:25:47.952 [2024-11-20 07:27:51.273912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.273961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.274102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.274128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.274240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.274267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.274347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.274374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.274484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.274511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.274639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.274666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.274774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.274801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.274913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.274939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.275048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.275075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.275165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.275192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 
00:25:47.952 [2024-11-20 07:27:51.275299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.275331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.275466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.275492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.275591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.275617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.275705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.275731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.275821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.275847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.275973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.275998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.276112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.276140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.276272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.276322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.276449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.276477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.276620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.276646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 
00:25:47.952 [2024-11-20 07:27:51.276813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.276865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.277009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.277055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.277167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.277194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.277275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.952 [2024-11-20 07:27:51.277314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.952 qpair failed and we were unable to recover it. 00:25:47.952 [2024-11-20 07:27:51.277437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.277464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.277550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.277576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.277679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.277719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.277927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.277953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.278046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.278072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.278162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.278188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 
00:25:47.953 [2024-11-20 07:27:51.278323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.278349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.278441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.278468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.278548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.278574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.278652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.278678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.278794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.278821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.278958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.278985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.279071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.279097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.279185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.279211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.279312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.279339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.279444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.279470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 
00:25:47.953 [2024-11-20 07:27:51.279555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.279582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.279700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b39f30 is same with the state(6) to be set 00:25:47.953 [2024-11-20 07:27:51.279869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.279908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.280006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.280033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.280149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.280175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.280257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.280283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.280379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.280405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.280484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.280510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.280599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.280626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.280714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.280740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 
00:25:47.953 [2024-11-20 07:27:51.280829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.280855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.281006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.281032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.281167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.281193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.281310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.281337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.281447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.281473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.281593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.281619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.281757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.281783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.281921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.281947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.282101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.282141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.282267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.282295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 
00:25:47.953 [2024-11-20 07:27:51.282423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.282451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.282570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.282597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.282706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.282732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.282920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.282947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.283064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.283119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.283321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.283348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.283511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.283561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.283697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.283748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.283888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.283937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.284049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.284099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 
00:25:47.953 [2024-11-20 07:27:51.284235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.284263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.284383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.284409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.284527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.284554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.284629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.284656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.284819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.284859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.284995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.285046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.285181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.285221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.285321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.285349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.285445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.285472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.285611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.285637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 
00:25:47.953 [2024-11-20 07:27:51.285756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.285805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.285944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.285989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.953 qpair failed and we were unable to recover it. 00:25:47.953 [2024-11-20 07:27:51.286101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.953 [2024-11-20 07:27:51.286128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.286247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.286273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.286397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.286424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.286516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.286544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.286735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.286775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.286903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.286956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.287080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.287107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.287247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.287274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 
00:25:47.954 [2024-11-20 07:27:51.287365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.287393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.287486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.287518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.287609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.287636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.287752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.287778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.287900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.287940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.288158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.288207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.288325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.288351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.288489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.288515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.288653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.288702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.288837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.288889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 
00:25:47.954 [2024-11-20 07:27:51.288969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.288995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.289085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.289111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.289223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.289249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.289386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.289413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.289503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.289529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.289620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.289647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.289786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.289812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.289930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.289957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.290096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.290123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.290260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.290286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 
00:25:47.954 [2024-11-20 07:27:51.290391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.290417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.290530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.290556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.290663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.290688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.290845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.290900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.291033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.291059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.291165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.291191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.291282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.291315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.291401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.291427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.291513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.291545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.291688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.291714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 
00:25:47.954 [2024-11-20 07:27:51.291826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.291852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.291934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.291961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.292058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.292085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.292161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.292187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.292273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.292299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.292421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.292447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.292525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.292550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.292638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.292664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.292777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.292803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.292893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.292918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 
00:25:47.954 [2024-11-20 07:27:51.293066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.293105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.293208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.293236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.293347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.293385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.293508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.293536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.293656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.293683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.293772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.293798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.293914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.293963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.294077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.294103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.294202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.294231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.294372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.294399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 
00:25:47.954 [2024-11-20 07:27:51.294611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.294651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.294835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.294876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.295034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.295080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.295241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.295284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.295439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.295466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.954 [2024-11-20 07:27:51.295577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.954 [2024-11-20 07:27:51.295609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.954 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.295722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.295748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.295852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.295892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.296050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.296088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.296294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.296370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 
00:25:47.955 [2024-11-20 07:27:51.296488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.296514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.296595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.296622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.296779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.296820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.296956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.297003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.297192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.297231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.297356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.297384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.297504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.297531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.297648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.297674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.297804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.297843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.297993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.298048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 
00:25:47.955 [2024-11-20 07:27:51.298233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.298272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.298446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.298473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.298567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.298593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.298678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.298704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.298844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.298885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.299031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.299087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.299329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.299379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.299491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.299518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.299634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.299662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.299740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.299766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 
00:25:47.955 [2024-11-20 07:27:51.299914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.299954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.300172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.300212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.300375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.300402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.300523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.300550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.300725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.300751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.300864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.300890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.301010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.301050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.301219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.301259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.301388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.301415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.301538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.301565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 
00:25:47.955 [2024-11-20 07:27:51.301726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.301753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.301889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.301930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.302083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.302123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.302249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.302276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.302368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.302395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.302472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.302503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.302591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.302618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.302704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.302731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.302805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.302849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.302980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.303028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 
00:25:47.955 [2024-11-20 07:27:51.303172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.303232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.303423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.303451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.303544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.303571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.303680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.303706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.303840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.303879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.304091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.304130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.304244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.304282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.304421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.304448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.304545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.304572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.304694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.304720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 
00:25:47.955 [2024-11-20 07:27:51.304857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.304896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.305038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.305103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.305350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.305376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.305471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.305497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.305658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.305699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.305831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.305871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.305993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.306033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.306188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.306228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.955 [2024-11-20 07:27:51.306382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.955 [2024-11-20 07:27:51.306423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.955 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.306585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.306624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 
00:25:47.956 [2024-11-20 07:27:51.306772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.306811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.306940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.306979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.307124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.307163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.307321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.307361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.307523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.307562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.307680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.307720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.307873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.307913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.308066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.308106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.308247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.308286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.308455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.308494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 
00:25:47.956 [2024-11-20 07:27:51.308678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.308718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.308873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.308914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.309048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.309089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.309267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.309315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.309479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.309518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.309674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.309722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.309847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.309887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.310010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.310052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.310241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.310322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.310497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.310537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 
00:25:47.956 [2024-11-20 07:27:51.310689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.310729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.310853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.310895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.311032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.311072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.311204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.311243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.311421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.311461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.311590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.311630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.311787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.311827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.311958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.311999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.312159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.312199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.312365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.312405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 
00:25:47.956 [2024-11-20 07:27:51.312541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.312615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.312834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.312900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.313121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.313186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.313333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.313373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.313537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.313576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.313705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.313747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.313924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.313990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.314140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.314181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.314337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.314379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.314545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.314585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 
00:25:47.956 [2024-11-20 07:27:51.314748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.314787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.314920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.314960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.315144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.315187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.315349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.315393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.315567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.315609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.315778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.315819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.315986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.316029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.316167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.316211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.316406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.316449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.316601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.316643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 
00:25:47.956 [2024-11-20 07:27:51.316819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.316860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.316995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.317037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.317201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.317243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.317403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.317443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.317606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.317646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.317812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.317860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.956 qpair failed and we were unable to recover it. 00:25:47.956 [2024-11-20 07:27:51.318020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.956 [2024-11-20 07:27:51.318059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 00:25:47.957 [2024-11-20 07:27:51.318173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.318212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 00:25:47.957 [2024-11-20 07:27:51.318344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.318385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 00:25:47.957 [2024-11-20 07:27:51.318544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.318586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 
00:25:47.957 [2024-11-20 07:27:51.318782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.318847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 00:25:47.957 [2024-11-20 07:27:51.318993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.319063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 00:25:47.957 [2024-11-20 07:27:51.319276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.319324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 00:25:47.957 [2024-11-20 07:27:51.319489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.319530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 00:25:47.957 [2024-11-20 07:27:51.319682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.319721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 00:25:47.957 [2024-11-20 07:27:51.319853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.319893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 00:25:47.957 [2024-11-20 07:27:51.320012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.320051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 00:25:47.957 [2024-11-20 07:27:51.320200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.320240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 00:25:47.957 [2024-11-20 07:27:51.320432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.320472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 00:25:47.957 [2024-11-20 07:27:51.320693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.320734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 
00:25:47.957 [2024-11-20 07:27:51.320938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.320980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 00:25:47.957 [2024-11-20 07:27:51.321147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.321188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 00:25:47.957 [2024-11-20 07:27:51.321383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.321425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 00:25:47.957 [2024-11-20 07:27:51.321570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.321611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 00:25:47.957 [2024-11-20 07:27:51.321731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.321773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 00:25:47.957 [2024-11-20 07:27:51.322004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.322062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 00:25:47.957 [2024-11-20 07:27:51.322225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.322267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 00:25:47.957 [2024-11-20 07:27:51.322414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.322456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 00:25:47.957 [2024-11-20 07:27:51.322626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.322668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 00:25:47.957 [2024-11-20 07:27:51.322865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.322906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 
00:25:47.957 [2024-11-20 07:27:51.323058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.323099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 00:25:47.957 [2024-11-20 07:27:51.323230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.323271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 00:25:47.957 [2024-11-20 07:27:51.323445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.323487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 00:25:47.957 [2024-11-20 07:27:51.323626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.323668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 00:25:47.957 [2024-11-20 07:27:51.323788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.323831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 00:25:47.957 [2024-11-20 07:27:51.323997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.324039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 00:25:47.957 [2024-11-20 07:27:51.324201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.324242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 00:25:47.957 [2024-11-20 07:27:51.324367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.324408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 00:25:47.957 [2024-11-20 07:27:51.324577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.324620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 00:25:47.957 [2024-11-20 07:27:51.324819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.324897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 
00:25:47.957 [2024-11-20 07:27:51.325049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.325127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 00:25:47.957 [2024-11-20 07:27:51.325355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.325397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 00:25:47.957 [2024-11-20 07:27:51.325537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.325578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 00:25:47.957 [2024-11-20 07:27:51.325753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.325794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 00:25:47.957 [2024-11-20 07:27:51.325964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.326005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 00:25:47.957 [2024-11-20 07:27:51.326147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.326196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 00:25:47.957 [2024-11-20 07:27:51.326364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.326407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 00:25:47.957 [2024-11-20 07:27:51.326566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.326607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 00:25:47.957 [2024-11-20 07:27:51.326775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.326818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 00:25:47.957 [2024-11-20 07:27:51.326977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.327020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 
00:25:47.957 [2024-11-20 07:27:51.327142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.957 [2024-11-20 07:27:51.327184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.957 qpair failed and we were unable to recover it. 00:25:47.957 [2024-11-20 07:27:51.327347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.958 [2024-11-20 07:27:51.327390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.958 qpair failed and we were unable to recover it. 00:25:47.958 [2024-11-20 07:27:51.327589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.958 [2024-11-20 07:27:51.327632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.958 qpair failed and we were unable to recover it. 00:25:47.958 [2024-11-20 07:27:51.327834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.958 [2024-11-20 07:27:51.327875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.958 qpair failed and we were unable to recover it. 00:25:47.958 [2024-11-20 07:27:51.328038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.958 [2024-11-20 07:27:51.328080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.958 qpair failed and we were unable to recover it. 00:25:47.958 [2024-11-20 07:27:51.328219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.958 [2024-11-20 07:27:51.328263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.958 qpair failed and we were unable to recover it. 00:25:47.958 [2024-11-20 07:27:51.328416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.958 [2024-11-20 07:27:51.328458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.958 qpair failed and we were unable to recover it. 00:25:47.958 [2024-11-20 07:27:51.328594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.958 [2024-11-20 07:27:51.328637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.958 qpair failed and we were unable to recover it. 00:25:47.958 [2024-11-20 07:27:51.328800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.958 [2024-11-20 07:27:51.328842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.958 qpair failed and we were unable to recover it. 00:25:47.958 [2024-11-20 07:27:51.328988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.958 [2024-11-20 07:27:51.329029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:47.958 qpair failed and we were unable to recover it. 
00:25:48.237 [2024-11-20 07:27:51.375011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.375068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.375259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.375315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.375529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.375576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.375730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.375778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.376001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.376051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.376212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.376261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.376506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.376556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.376717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.376768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.376966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.377016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.377170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.377221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 
00:25:48.237 [2024-11-20 07:27:51.377385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.377436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.377632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.377683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.377882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.377933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.378086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.378137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.378331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.378381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.378553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.378601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.378814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.378864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.379053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.379102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.379298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.379356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.379513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.379563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 
00:25:48.237 [2024-11-20 07:27:51.379717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.379767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.379963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.380012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.380187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.380237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.380441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.380492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.380656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.380706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.380868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.380927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.381122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.381171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.381369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.381419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.381610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.381659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.381855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.381905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 
00:25:48.237 [2024-11-20 07:27:51.382057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.382106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.382284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.382343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.382544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.382594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.382756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.382804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.382996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.383044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.383198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.383271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.383515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.383564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.383779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.383829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.383987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.384037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.384206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.384256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 
00:25:48.237 [2024-11-20 07:27:51.384490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.384541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.384703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.384753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.384972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.385021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.385209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.385268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.385523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.385573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.385789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.385865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.386050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.386110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.386378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.386454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.386643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.386694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.386880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.386931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 
00:25:48.237 [2024-11-20 07:27:51.387090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.387162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.387370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.387421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.387637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.387698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.387889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.387939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.388117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.388191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.388382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.388432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.388604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.388653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.388859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.388908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.237 [2024-11-20 07:27:51.389145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.237 [2024-11-20 07:27:51.389215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.237 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.389408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.389460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 
00:25:48.238 [2024-11-20 07:27:51.389705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.389794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.389998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.390054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.390284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.390365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.390570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.390620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.390806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.390856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.391037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.391087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.391277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.391347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.391505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.391557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.391781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.391830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.392015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.392065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 
00:25:48.238 [2024-11-20 07:27:51.392294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.392360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.392526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.392580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.392775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.392828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.392983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.393036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.393211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.393264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.393484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.393537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.393733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.393787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.393994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.394046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.394231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.394284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.394530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.394583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 
00:25:48.238 [2024-11-20 07:27:51.394785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.394837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.395027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.395079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.395283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.395382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.395589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.395641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.395878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.395930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.396073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.396125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.396331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.396385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.396570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.396622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.396764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.396816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.397011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.397064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 
00:25:48.238 [2024-11-20 07:27:51.397264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.397326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.397543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.397595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.397787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.397848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.398058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.398111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.398347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.398401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.398595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.398649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.398845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.398900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.399099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.399151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.399386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.399439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.399649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.399702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 
00:25:48.238 [2024-11-20 07:27:51.399932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.399985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.400214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.400269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.400497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.400570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.400815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.400872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.401109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.401180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.401405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.401482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.401788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.401864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.402113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.402170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.402360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.402412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.402580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.402632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 
00:25:48.238 [2024-11-20 07:27:51.402828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.402908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.403086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.403157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.403341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.403415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.403618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.403670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.403870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.403922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.404132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.404185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.404343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.404396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.404568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.404620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.404855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.404908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.405142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.405194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 
00:25:48.238 [2024-11-20 07:27:51.405344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.405397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.405558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.238 [2024-11-20 07:27:51.405609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.238 qpair failed and we were unable to recover it. 00:25:48.238 [2024-11-20 07:27:51.405782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.405853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.406127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.406184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.406425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.406504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.406719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.406792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.406968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.407041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.407285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.407348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.407610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.407685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.407921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.407973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 
00:25:48.239 [2024-11-20 07:27:51.408152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.408207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.408418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.408471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.408668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.408729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.408883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.408938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.409172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.409224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.409475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.409527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.409775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.409828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.410028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.410080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.410256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.410338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.410535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.410612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 
00:25:48.239 [2024-11-20 07:27:51.410800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.410872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.411119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.411175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.411427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.411503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.411713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.411770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.412000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.412052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.412232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.412284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.412525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.412581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.412735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.412787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.412966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.413017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.413250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.413333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 
00:25:48.239 [2024-11-20 07:27:51.413516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.413570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.413737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.413789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.413988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.414042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.414197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.414249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.414418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.414471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.414672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.414724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.414982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.415037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.415228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.415301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.415492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.415547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.415762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.415814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 
00:25:48.239 [2024-11-20 07:27:51.416065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.416117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.416328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.416403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.416626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.416705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.416949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.417005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.417230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.417289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.417553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.417629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.417867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.417947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.418187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.418245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.418539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.418614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.418830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.418911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 
00:25:48.239 [2024-11-20 07:27:51.419146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.419203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.419475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.419554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.419829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.419914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.420125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.420183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.420436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.420514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.420799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.420875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.421148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.421204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.421450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.421527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.421693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.421766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 00:25:48.239 [2024-11-20 07:27:51.422013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.239 [2024-11-20 07:27:51.422069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.239 qpair failed and we were unable to recover it. 
00:25:48.239 [2024-11-20 07:27:51.422278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.422341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.422555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.422608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.422823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.422880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.423089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.423145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.423347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.423404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.423610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.423667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.423859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.423919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.424133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.424190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.424436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.424493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.424727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.424783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 
00:25:48.240 [2024-11-20 07:27:51.424959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.425017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.425275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.425342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.425599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.425655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.425937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.426011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.426222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.426278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.426522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.426579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.426807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.426863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.427032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.427091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.427347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.427404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.427660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.427717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 
00:25:48.240 [2024-11-20 07:27:51.427946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.428002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.428218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.428274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.428510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.428569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.428791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.428848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.429093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.429149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.429331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.429389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.429627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.429685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.429904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.429961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.430214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.430270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.430548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.430625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 
00:25:48.240 [2024-11-20 07:27:51.430933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.431010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.431232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.431287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.431536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.431601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.431824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.431898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.432086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.432141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.432362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.432419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.432626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.432682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.432927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.433001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.433227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.433284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.433475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.433531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 
00:25:48.240 [2024-11-20 07:27:51.433697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.433753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.433952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.434011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.434208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.434264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.434532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.434607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.434850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.434926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.435147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.435203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.435400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.435461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.435670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.435747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.435975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.436031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.436206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.436262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 
00:25:48.240 [2024-11-20 07:27:51.436434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.436491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.436714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.436771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.436976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.437033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.437245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.437313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.437519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.437598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.437896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.437970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.438156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.438212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.438418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.438476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.438714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.438789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.438985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.439041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 
00:25:48.240 [2024-11-20 07:27:51.439315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.439373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.439585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.439645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.240 [2024-11-20 07:27:51.439900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.240 [2024-11-20 07:27:51.439959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.240 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.440207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.440262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.440491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.440547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.440754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.440810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.440990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.441047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.441209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.441268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.441531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.441607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.441845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.441922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 
00:25:48.241 [2024-11-20 07:27:51.442148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.442204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.442447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.442522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.442748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.442841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.443101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.443158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.443374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.443432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.443652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.443730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.443980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.444036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.444221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.444276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.444527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.444603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.444835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.444891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 
00:25:48.241 [2024-11-20 07:27:51.445062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.445116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.445358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.445415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.445620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.445695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.445920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.445977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.446235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.446291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.446511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.446586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.446808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.446882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.447102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.447158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.447357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.447418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.447661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.447737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 
00:25:48.241 [2024-11-20 07:27:51.447969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.448043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.448276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.448344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.448575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.448651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.448885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.448960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.449169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.449229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.449517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.449594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.449821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.449897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.450127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.450184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.450473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.450548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.450809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.450886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 
00:25:48.241 [2024-11-20 07:27:51.451116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.451173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.451398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.451476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.451682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.451756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.452030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.452105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.452381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.452458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.452723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.452799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.453015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.453071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.453285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.453352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.453545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.453601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.453815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.453871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 
00:25:48.241 [2024-11-20 07:27:51.454111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.454167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.454393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.454470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.454713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.454797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.454958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.455011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.455185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.455242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.455529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.455605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.455885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.455959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.456167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.456225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.456497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.456573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.456843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.456918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 
00:25:48.241 [2024-11-20 07:27:51.457099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.457155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.457408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.457485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.457729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.457805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.458014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.458069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.458283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.458352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.458635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.458710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.241 qpair failed and we were unable to recover it. 00:25:48.241 [2024-11-20 07:27:51.458956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.241 [2024-11-20 07:27:51.459032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.242 qpair failed and we were unable to recover it. 00:25:48.242 [2024-11-20 07:27:51.459253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.242 [2024-11-20 07:27:51.459327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.242 qpair failed and we were unable to recover it. 00:25:48.242 [2024-11-20 07:27:51.459545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.242 [2024-11-20 07:27:51.459603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.242 qpair failed and we were unable to recover it. 00:25:48.242 [2024-11-20 07:27:51.459861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.242 [2024-11-20 07:27:51.459918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.242 qpair failed and we were unable to recover it. 
00:25:48.245 [2024-11-20 07:27:51.520200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.245 [2024-11-20 07:27:51.520257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.245 qpair failed and we were unable to recover it. 00:25:48.245 [2024-11-20 07:27:51.520549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.245 [2024-11-20 07:27:51.520625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.245 qpair failed and we were unable to recover it. 00:25:48.245 [2024-11-20 07:27:51.520873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.245 [2024-11-20 07:27:51.520953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.245 qpair failed and we were unable to recover it. 00:25:48.245 [2024-11-20 07:27:51.521132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.245 [2024-11-20 07:27:51.521189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.245 qpair failed and we were unable to recover it. 00:25:48.245 [2024-11-20 07:27:51.521430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.245 [2024-11-20 07:27:51.521520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.245 qpair failed and we were unable to recover it. 00:25:48.245 [2024-11-20 07:27:51.521725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.245 [2024-11-20 07:27:51.521798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.245 qpair failed and we were unable to recover it. 00:25:48.245 [2024-11-20 07:27:51.522079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.245 [2024-11-20 07:27:51.522155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.245 qpair failed and we were unable to recover it. 00:25:48.245 [2024-11-20 07:27:51.522345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.245 [2024-11-20 07:27:51.522404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.245 qpair failed and we were unable to recover it. 00:25:48.245 [2024-11-20 07:27:51.522687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.245 [2024-11-20 07:27:51.522764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.245 qpair failed and we were unable to recover it. 00:25:48.245 [2024-11-20 07:27:51.523022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.245 [2024-11-20 07:27:51.523096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.245 qpair failed and we were unable to recover it. 
00:25:48.245 [2024-11-20 07:27:51.523359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.245 [2024-11-20 07:27:51.523436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.245 qpair failed and we were unable to recover it. 00:25:48.245 [2024-11-20 07:27:51.523668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.245 [2024-11-20 07:27:51.523743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.245 qpair failed and we were unable to recover it. 00:25:48.245 [2024-11-20 07:27:51.523987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.245 [2024-11-20 07:27:51.524043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.245 qpair failed and we were unable to recover it. 00:25:48.245 [2024-11-20 07:27:51.524291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.245 [2024-11-20 07:27:51.524357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.245 qpair failed and we were unable to recover it. 00:25:48.245 [2024-11-20 07:27:51.524559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.245 [2024-11-20 07:27:51.524635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.245 qpair failed and we were unable to recover it. 00:25:48.245 [2024-11-20 07:27:51.524831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.245 [2024-11-20 07:27:51.524903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.245 qpair failed and we were unable to recover it. 00:25:48.245 [2024-11-20 07:27:51.525112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.245 [2024-11-20 07:27:51.525169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.245 qpair failed and we were unable to recover it. 00:25:48.245 [2024-11-20 07:27:51.525422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.245 [2024-11-20 07:27:51.525499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.245 qpair failed and we were unable to recover it. 00:25:48.245 [2024-11-20 07:27:51.525781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.245 [2024-11-20 07:27:51.525855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.245 qpair failed and we were unable to recover it. 00:25:48.245 [2024-11-20 07:27:51.526068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.245 [2024-11-20 07:27:51.526123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.245 qpair failed and we were unable to recover it. 
00:25:48.245 [2024-11-20 07:27:51.526321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.245 [2024-11-20 07:27:51.526379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.245 qpair failed and we were unable to recover it. 00:25:48.245 [2024-11-20 07:27:51.526597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.245 [2024-11-20 07:27:51.526656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.245 qpair failed and we were unable to recover it. 00:25:48.245 [2024-11-20 07:27:51.526860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.245 [2024-11-20 07:27:51.526946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.245 qpair failed and we were unable to recover it. 00:25:48.245 [2024-11-20 07:27:51.527126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.245 [2024-11-20 07:27:51.527183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.245 qpair failed and we were unable to recover it. 00:25:48.245 [2024-11-20 07:27:51.527475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.245 [2024-11-20 07:27:51.527551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.245 qpair failed and we were unable to recover it. 00:25:48.245 [2024-11-20 07:27:51.527831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.245 [2024-11-20 07:27:51.527906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.245 qpair failed and we were unable to recover it. 00:25:48.245 [2024-11-20 07:27:51.528076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.245 [2024-11-20 07:27:51.528132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.245 qpair failed and we were unable to recover it. 00:25:48.245 [2024-11-20 07:27:51.528354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.245 [2024-11-20 07:27:51.528412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.245 qpair failed and we were unable to recover it. 00:25:48.245 [2024-11-20 07:27:51.528714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.245 [2024-11-20 07:27:51.528790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.245 qpair failed and we were unable to recover it. 00:25:48.245 [2024-11-20 07:27:51.528997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.245 [2024-11-20 07:27:51.529053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.245 qpair failed and we were unable to recover it. 
00:25:48.245 [2024-11-20 07:27:51.529272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.245 [2024-11-20 07:27:51.529338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.245 qpair failed and we were unable to recover it. 00:25:48.245 [2024-11-20 07:27:51.529574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.245 [2024-11-20 07:27:51.529648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.245 qpair failed and we were unable to recover it. 00:25:48.245 [2024-11-20 07:27:51.529899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.245 [2024-11-20 07:27:51.529974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.245 qpair failed and we were unable to recover it. 00:25:48.245 [2024-11-20 07:27:51.530200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.245 [2024-11-20 07:27:51.530256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.245 qpair failed and we were unable to recover it. 00:25:48.245 [2024-11-20 07:27:51.530508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.245 [2024-11-20 07:27:51.530582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.245 qpair failed and we were unable to recover it. 00:25:48.245 [2024-11-20 07:27:51.530785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.245 [2024-11-20 07:27:51.530859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.245 qpair failed and we were unable to recover it. 00:25:48.245 [2024-11-20 07:27:51.531062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.245 [2024-11-20 07:27:51.531117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.245 qpair failed and we were unable to recover it. 00:25:48.245 [2024-11-20 07:27:51.531351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.245 [2024-11-20 07:27:51.531430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.245 qpair failed and we were unable to recover it. 00:25:48.245 [2024-11-20 07:27:51.531711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.245 [2024-11-20 07:27:51.531786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.245 qpair failed and we were unable to recover it. 00:25:48.245 [2024-11-20 07:27:51.532008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.532063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 
00:25:48.246 [2024-11-20 07:27:51.532278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.532343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.532578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.532653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.532939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.533013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.533266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.533346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.533557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.533633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.533879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.533956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.534178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.534233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.534484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.534560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.534803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.534881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.535077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.535143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 
00:25:48.246 [2024-11-20 07:27:51.535417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.535492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.535784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.535859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.536041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.536097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.536384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.536459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.536650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.536706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.536918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.536974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.537222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.537278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.537495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.537572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.537790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.537846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.538099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.538155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 
00:25:48.246 [2024-11-20 07:27:51.538408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.538483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.538715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.538790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.539013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.539069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.539281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.539347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.539574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.539650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.539859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.539934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.540185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.540240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.540476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.540533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.540760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.540836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.541065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.541139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 
00:25:48.246 [2024-11-20 07:27:51.541349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.541407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.541682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.541756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.541959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.542037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.542245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.542315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.542600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.542676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.542914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.542988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.543228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.543285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.543540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.543615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.543854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.543929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.544176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.544233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 
00:25:48.246 [2024-11-20 07:27:51.544525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.544601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.544832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.544889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.545077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.545133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.545329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.545386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.545602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.545657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.545837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.545895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.546108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.546164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.546420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.546478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.546655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.546713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.546884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.546949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 
00:25:48.246 [2024-11-20 07:27:51.547137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.547193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.547386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.547443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.547698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.547754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.548005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.548061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.548321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.548378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.548650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.548725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.549028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.549103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.549375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.549434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.549698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.549774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.550006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.550081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 
00:25:48.246 [2024-11-20 07:27:51.550315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.550373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.550614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.550687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.246 qpair failed and we were unable to recover it. 00:25:48.246 [2024-11-20 07:27:51.550933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.246 [2024-11-20 07:27:51.551009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.247 qpair failed and we were unable to recover it. 00:25:48.247 [2024-11-20 07:27:51.551269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.247 [2024-11-20 07:27:51.551336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.247 qpair failed and we were unable to recover it. 00:25:48.247 [2024-11-20 07:27:51.551534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.247 [2024-11-20 07:27:51.551608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.247 qpair failed and we were unable to recover it. 00:25:48.247 [2024-11-20 07:27:51.551816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.247 [2024-11-20 07:27:51.551890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.247 qpair failed and we were unable to recover it. 00:25:48.247 [2024-11-20 07:27:51.552099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.247 [2024-11-20 07:27:51.552176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.247 qpair failed and we were unable to recover it. 00:25:48.247 [2024-11-20 07:27:51.552460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.247 [2024-11-20 07:27:51.552535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.247 qpair failed and we were unable to recover it. 00:25:48.247 [2024-11-20 07:27:51.552816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.247 [2024-11-20 07:27:51.552892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.247 qpair failed and we were unable to recover it. 00:25:48.247 [2024-11-20 07:27:51.553102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.247 [2024-11-20 07:27:51.553158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.247 qpair failed and we were unable to recover it. 
00:25:48.247 [2024-11-20 07:27:51.553433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.247 [2024-11-20 07:27:51.553509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.247 qpair failed and we were unable to recover it. 00:25:48.247 [2024-11-20 07:27:51.553764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.247 [2024-11-20 07:27:51.553836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.247 qpair failed and we were unable to recover it. 00:25:48.247 [2024-11-20 07:27:51.554081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.247 [2024-11-20 07:27:51.554157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.247 qpair failed and we were unable to recover it. 00:25:48.247 [2024-11-20 07:27:51.554378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.247 [2024-11-20 07:27:51.554467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.247 qpair failed and we were unable to recover it. 00:25:48.247 [2024-11-20 07:27:51.554747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.247 [2024-11-20 07:27:51.554823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.247 qpair failed and we were unable to recover it. 00:25:48.247 [2024-11-20 07:27:51.555041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.247 [2024-11-20 07:27:51.555097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.247 qpair failed and we were unable to recover it. 00:25:48.247 [2024-11-20 07:27:51.555325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.247 [2024-11-20 07:27:51.555383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.247 qpair failed and we were unable to recover it. 00:25:48.247 [2024-11-20 07:27:51.555627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.247 [2024-11-20 07:27:51.555703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.247 qpair failed and we were unable to recover it. 00:25:48.247 [2024-11-20 07:27:51.555943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.247 [2024-11-20 07:27:51.556021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.247 qpair failed and we were unable to recover it. 00:25:48.247 [2024-11-20 07:27:51.556233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.247 [2024-11-20 07:27:51.556291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.247 qpair failed and we were unable to recover it. 
00:25:48.247 [2024-11-20 07:27:51.556560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.247 [2024-11-20 07:27:51.556634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.247 qpair failed and we were unable to recover it. 00:25:48.247 [2024-11-20 07:27:51.556886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.247 [2024-11-20 07:27:51.556960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.247 qpair failed and we were unable to recover it. 00:25:48.247 [2024-11-20 07:27:51.557144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.247 [2024-11-20 07:27:51.557200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.247 qpair failed and we were unable to recover it. 00:25:48.247 [2024-11-20 07:27:51.557363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.247 [2024-11-20 07:27:51.557423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.247 qpair failed and we were unable to recover it. 00:25:48.247 [2024-11-20 07:27:51.557699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.247 [2024-11-20 07:27:51.557776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.247 qpair failed and we were unable to recover it. 00:25:48.247 [2024-11-20 07:27:51.558039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.247 [2024-11-20 07:27:51.558095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.247 qpair failed and we were unable to recover it. 00:25:48.247 [2024-11-20 07:27:51.558358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.247 [2024-11-20 07:27:51.558415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.247 qpair failed and we were unable to recover it. 00:25:48.247 [2024-11-20 07:27:51.558678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.247 [2024-11-20 07:27:51.558753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.247 qpair failed and we were unable to recover it. 00:25:48.247 [2024-11-20 07:27:51.558995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.247 [2024-11-20 07:27:51.559070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.247 qpair failed and we were unable to recover it. 00:25:48.247 [2024-11-20 07:27:51.559259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.247 [2024-11-20 07:27:51.559335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.247 qpair failed and we were unable to recover it. 
00:25:48.247 [2024-11-20 07:27:51.559581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.247 [2024-11-20 07:27:51.559656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420
00:25:48.247 qpair failed and we were unable to recover it.
00:25:48.247 [... the same three-line sequence — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats for every retry on this qpair from 07:27:51.559 through 07:27:51.622 ...]
00:25:48.250 [2024-11-20 07:27:51.621992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.250 [2024-11-20 07:27:51.622050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420
00:25:48.250 qpair failed and we were unable to recover it.
00:25:48.250 [2024-11-20 07:27:51.622246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.250 [2024-11-20 07:27:51.622313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.250 qpair failed and we were unable to recover it. 00:25:48.250 [2024-11-20 07:27:51.622538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.250 [2024-11-20 07:27:51.622612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.250 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.622861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.622937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.623159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.623215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.623504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.623579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.623816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.623891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.624143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.624199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.624439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.624497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.624737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.624811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.625061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.625117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 
00:25:48.251 [2024-11-20 07:27:51.625290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.625359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.625632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.625706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.625943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.626017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.626279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.626360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.626606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.626682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.626969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.627043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.627324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.627381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.627624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.627701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.627985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.628060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.628223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.628279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 
00:25:48.251 [2024-11-20 07:27:51.628525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.628600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.628849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.628924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.629126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.629182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.629404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.629479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.629717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.629793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.630029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.630104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.630325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.630385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.630668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.630743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.630969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.631044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.631231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.631297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 
00:25:48.251 [2024-11-20 07:27:51.631538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.631613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.631902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.631977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.632206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.632265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.632556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.632632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.632922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.632997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.633219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.633276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.633543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.633617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.633838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.633915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.634122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.634180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.634376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.634457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 
00:25:48.251 [2024-11-20 07:27:51.634692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.634766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.634945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.635018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.635230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.635287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.635556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.635631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.635868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.635943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.636117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.636174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.636406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.636485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.636734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.636809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.636977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.637030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.637216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.637274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 
00:25:48.251 [2024-11-20 07:27:51.637466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.637526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.637775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.637831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.638045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.638103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.638275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.638361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.638581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.638639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.638854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.638912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.639167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.639223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.639477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.639553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.639770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.639830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.640055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.640111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 
00:25:48.251 [2024-11-20 07:27:51.640325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.640382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.640637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.640693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.640933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.641008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.641221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.641279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.641535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.251 [2024-11-20 07:27:51.641612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.251 qpair failed and we were unable to recover it. 00:25:48.251 [2024-11-20 07:27:51.641889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.252 [2024-11-20 07:27:51.641965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.252 qpair failed and we were unable to recover it. 00:25:48.252 [2024-11-20 07:27:51.642178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.252 [2024-11-20 07:27:51.642236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.252 qpair failed and we were unable to recover it. 00:25:48.252 [2024-11-20 07:27:51.642506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.252 [2024-11-20 07:27:51.642582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.252 qpair failed and we were unable to recover it. 00:25:48.252 [2024-11-20 07:27:51.642831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.252 [2024-11-20 07:27:51.642907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.252 qpair failed and we were unable to recover it. 00:25:48.252 [2024-11-20 07:27:51.643128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.252 [2024-11-20 07:27:51.643194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.252 qpair failed and we were unable to recover it. 
00:25:48.252 [2024-11-20 07:27:51.643410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.252 [2024-11-20 07:27:51.643488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.252 qpair failed and we were unable to recover it. 00:25:48.252 [2024-11-20 07:27:51.643770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.252 [2024-11-20 07:27:51.643848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.252 qpair failed and we were unable to recover it. 00:25:48.252 [2024-11-20 07:27:51.644020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.252 [2024-11-20 07:27:51.644076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.252 qpair failed and we were unable to recover it. 00:25:48.252 [2024-11-20 07:27:51.644288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.252 [2024-11-20 07:27:51.644356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.252 qpair failed and we were unable to recover it. 00:25:48.252 [2024-11-20 07:27:51.644591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.252 [2024-11-20 07:27:51.644667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.252 qpair failed and we were unable to recover it. 00:25:48.252 [2024-11-20 07:27:51.644899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.252 [2024-11-20 07:27:51.644974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.252 qpair failed and we were unable to recover it. 00:25:48.252 [2024-11-20 07:27:51.645191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.252 [2024-11-20 07:27:51.645247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.252 qpair failed and we were unable to recover it. 00:25:48.252 [2024-11-20 07:27:51.645523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.252 [2024-11-20 07:27:51.645599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.252 qpair failed and we were unable to recover it. 00:25:48.252 [2024-11-20 07:27:51.645844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.252 [2024-11-20 07:27:51.645920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.252 qpair failed and we were unable to recover it. 00:25:48.252 [2024-11-20 07:27:51.646095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.252 [2024-11-20 07:27:51.646154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.252 qpair failed and we were unable to recover it. 
00:25:48.252 [2024-11-20 07:27:51.646394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.252 [2024-11-20 07:27:51.646471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.252 qpair failed and we were unable to recover it. 00:25:48.252 [2024-11-20 07:27:51.646752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.252 [2024-11-20 07:27:51.646827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.252 qpair failed and we were unable to recover it. 00:25:48.252 [2024-11-20 07:27:51.647015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.252 [2024-11-20 07:27:51.647072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.252 qpair failed and we were unable to recover it. 00:25:48.252 [2024-11-20 07:27:51.647339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.252 [2024-11-20 07:27:51.647396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.252 qpair failed and we were unable to recover it. 00:25:48.252 [2024-11-20 07:27:51.647667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.252 [2024-11-20 07:27:51.647724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.252 qpair failed and we were unable to recover it. 00:25:48.252 [2024-11-20 07:27:51.647978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.252 [2024-11-20 07:27:51.648055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.252 qpair failed and we were unable to recover it. 00:25:48.252 [2024-11-20 07:27:51.648269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.252 [2024-11-20 07:27:51.648335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.252 qpair failed and we were unable to recover it. 00:25:48.252 [2024-11-20 07:27:51.648557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.252 [2024-11-20 07:27:51.648645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.252 qpair failed and we were unable to recover it. 00:25:48.526 [2024-11-20 07:27:51.648884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.526 [2024-11-20 07:27:51.648958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.526 qpair failed and we were unable to recover it. 00:25:48.526 [2024-11-20 07:27:51.649144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.526 [2024-11-20 07:27:51.649199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.526 qpair failed and we were unable to recover it. 
00:25:48.526 [2024-11-20 07:27:51.649455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.526 [2024-11-20 07:27:51.649530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.526 qpair failed and we were unable to recover it. 00:25:48.526 [2024-11-20 07:27:51.649713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.526 [2024-11-20 07:27:51.649792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.526 qpair failed and we were unable to recover it. 00:25:48.526 [2024-11-20 07:27:51.650042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.526 [2024-11-20 07:27:51.650117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.526 qpair failed and we were unable to recover it. 00:25:48.526 [2024-11-20 07:27:51.650346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.526 [2024-11-20 07:27:51.650403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.526 qpair failed and we were unable to recover it. 00:25:48.526 [2024-11-20 07:27:51.650611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.526 [2024-11-20 07:27:51.650687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.526 qpair failed and we were unable to recover it. 00:25:48.526 [2024-11-20 07:27:51.650908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.526 [2024-11-20 07:27:51.650983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.526 qpair failed and we were unable to recover it. 00:25:48.526 [2024-11-20 07:27:51.651188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.527 [2024-11-20 07:27:51.651245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.527 qpair failed and we were unable to recover it. 00:25:48.527 [2024-11-20 07:27:51.651469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.527 [2024-11-20 07:27:51.651544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.527 qpair failed and we were unable to recover it. 00:25:48.527 [2024-11-20 07:27:51.651722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.527 [2024-11-20 07:27:51.651777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.527 qpair failed and we were unable to recover it. 00:25:48.527 [2024-11-20 07:27:51.651988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.527 [2024-11-20 07:27:51.652043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.527 qpair failed and we were unable to recover it. 
00:25:48.527 [2024-11-20 07:27:51.652228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.527 [2024-11-20 07:27:51.652284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.527 qpair failed and we were unable to recover it. 00:25:48.527 [2024-11-20 07:27:51.652522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.527 [2024-11-20 07:27:51.652577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.527 qpair failed and we were unable to recover it. 00:25:48.527 [2024-11-20 07:27:51.652754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.527 [2024-11-20 07:27:51.652810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.527 qpair failed and we were unable to recover it. 00:25:48.527 [2024-11-20 07:27:51.652995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.527 [2024-11-20 07:27:51.653051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.527 qpair failed and we were unable to recover it. 00:25:48.527 [2024-11-20 07:27:51.653269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.527 [2024-11-20 07:27:51.653338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.527 qpair failed and we were unable to recover it. 00:25:48.527 [2024-11-20 07:27:51.653563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.527 [2024-11-20 07:27:51.653620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.527 qpair failed and we were unable to recover it. 00:25:48.527 [2024-11-20 07:27:51.653872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.527 [2024-11-20 07:27:51.653928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.527 qpair failed and we were unable to recover it. 00:25:48.527 [2024-11-20 07:27:51.654102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.527 [2024-11-20 07:27:51.654162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.527 qpair failed and we were unable to recover it. 00:25:48.527 [2024-11-20 07:27:51.654394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.527 [2024-11-20 07:27:51.654473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.527 qpair failed and we were unable to recover it. 00:25:48.527 [2024-11-20 07:27:51.654759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.527 [2024-11-20 07:27:51.654843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.527 qpair failed and we were unable to recover it. 
00:25:48.527 [2024-11-20 07:27:51.655095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.527 [2024-11-20 07:27:51.655151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.527 qpair failed and we were unable to recover it. 00:25:48.527 [2024-11-20 07:27:51.655399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.527 [2024-11-20 07:27:51.655476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.527 qpair failed and we were unable to recover it. 00:25:48.527 [2024-11-20 07:27:51.655718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.527 [2024-11-20 07:27:51.655793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.527 qpair failed and we were unable to recover it. 00:25:48.527 [2024-11-20 07:27:51.655971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.527 [2024-11-20 07:27:51.656028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.527 qpair failed and we were unable to recover it. 00:25:48.527 [2024-11-20 07:27:51.656279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.527 [2024-11-20 07:27:51.656347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.527 qpair failed and we were unable to recover it. 00:25:48.527 [2024-11-20 07:27:51.656574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.527 [2024-11-20 07:27:51.656629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.527 qpair failed and we were unable to recover it. 00:25:48.527 [2024-11-20 07:27:51.656871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.527 [2024-11-20 07:27:51.656946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.527 qpair failed and we were unable to recover it. 00:25:48.527 [2024-11-20 07:27:51.657165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.527 [2024-11-20 07:27:51.657223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.527 qpair failed and we were unable to recover it. 00:25:48.527 [2024-11-20 07:27:51.657460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.527 [2024-11-20 07:27:51.657518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.527 qpair failed and we were unable to recover it. 00:25:48.527 [2024-11-20 07:27:51.657697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.527 [2024-11-20 07:27:51.657754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.527 qpair failed and we were unable to recover it. 
00:25:48.527 [2024-11-20 07:27:51.657962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.527 [2024-11-20 07:27:51.658019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.527 qpair failed and we were unable to recover it. 00:25:48.527 [2024-11-20 07:27:51.658182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.527 [2024-11-20 07:27:51.658237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.527 qpair failed and we were unable to recover it. 00:25:48.527 [2024-11-20 07:27:51.658459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.527 [2024-11-20 07:27:51.658536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.527 qpair failed and we were unable to recover it. 00:25:48.527 [2024-11-20 07:27:51.658795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.527 [2024-11-20 07:27:51.658852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.527 qpair failed and we were unable to recover it. 00:25:48.527 [2024-11-20 07:27:51.659010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.527 [2024-11-20 07:27:51.659068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.527 qpair failed and we were unable to recover it. 00:25:48.527 [2024-11-20 07:27:51.659276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.527 [2024-11-20 07:27:51.659345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.527 qpair failed and we were unable to recover it. 00:25:48.527 [2024-11-20 07:27:51.659600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.527 [2024-11-20 07:27:51.659657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.527 qpair failed and we were unable to recover it. 00:25:48.527 [2024-11-20 07:27:51.659904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.527 [2024-11-20 07:27:51.659960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.527 qpair failed and we were unable to recover it. 00:25:48.527 [2024-11-20 07:27:51.660182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.527 [2024-11-20 07:27:51.660239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.527 qpair failed and we were unable to recover it. 00:25:48.527 [2024-11-20 07:27:51.660456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.527 [2024-11-20 07:27:51.660531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.527 qpair failed and we were unable to recover it. 
00:25:48.527 [2024-11-20 07:27:51.660770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.527 [2024-11-20 07:27:51.660845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420
00:25:48.527 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111) and sock connection error for tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420, each followed by "qpair failed and we were unable to recover it.", repeats for every connection attempt through 07:27:51.726 ...]
00:25:48.531 [2024-11-20 07:27:51.726909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.531 [2024-11-20 07:27:51.726965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420
00:25:48.531 qpair failed and we were unable to recover it.
00:25:48.531 [2024-11-20 07:27:51.727127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.727184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.727443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.727517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.727759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.727835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.728042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.728099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.728326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.728403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.728651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.728725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.728893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.728951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.729116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.729174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.729421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.729497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.729722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.729797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 
00:25:48.531 [2024-11-20 07:27:51.730041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.730097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.730352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.730410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.730648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.730705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.730876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.730935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.731125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.731182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.731387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.731465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.731760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.731817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.732022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.732079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.732333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.732391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.732625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.732701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 
00:25:48.531 [2024-11-20 07:27:51.732957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.733031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.733281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.733352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.733559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.733615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.733883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.733939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.734145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.734201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.734454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.734531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.734790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.734865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.735039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.735098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.735388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.735465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.735720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.735796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 
00:25:48.531 [2024-11-20 07:27:51.735979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.736035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.736271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.736338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.736556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.736634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.736875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.736952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.737156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.737214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.737400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.737460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.737673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.737740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.737988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.738044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.738261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.738330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.738541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.738599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 
00:25:48.531 [2024-11-20 07:27:51.738775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.738835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.739022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.739078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.739282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.739367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.739551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.739608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.739816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.739873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.740095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.740153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.740365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.740422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.740663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.740720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.740971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.741026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.741229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.741286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 
00:25:48.531 [2024-11-20 07:27:51.741578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.741637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.741854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.741910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.742087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.742146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.742348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.742405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.742612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.742669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.742952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.743027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.743280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.743347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.743583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.743660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.743901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.743977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.531 qpair failed and we were unable to recover it. 00:25:48.531 [2024-11-20 07:27:51.744159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.531 [2024-11-20 07:27:51.744216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 
00:25:48.532 [2024-11-20 07:27:51.744509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.744586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.744846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.744923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.745176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.745232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.745481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.745557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.745815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.745890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.746076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.746132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.746383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.746462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.746706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.746783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.746945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.747003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.747187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.747244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 
00:25:48.532 [2024-11-20 07:27:51.747516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.747573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.747857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.747932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.748103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.748162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.748328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.748388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.748621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.748700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.748900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.748975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.749189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.749255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.749480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.749556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.749853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.749927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.750146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.750203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 
00:25:48.532 [2024-11-20 07:27:51.750415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.750492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.750722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.750778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.750951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.751007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.751202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.751259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.751452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.751509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.751751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.751807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.752022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.752079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.752295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.752362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.752575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.752632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.752798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.752856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 
00:25:48.532 [2024-11-20 07:27:51.753075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.753132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.753345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.753402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.753644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.753720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.753963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.754038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.754251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.754338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.754541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.754616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.754831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.754905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.755161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.755217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.755483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.755558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.755816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.755891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 
00:25:48.532 [2024-11-20 07:27:51.756089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.756146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.756395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.756471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.756676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.756752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.756977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.757034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.757227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.757284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.757541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.757599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.757774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.757833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.758094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.758153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.758388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.758464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.758745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.758820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 
00:25:48.532 [2024-11-20 07:27:51.759036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.759093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.759295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.759362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.759577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.759653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.759833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.759892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.760086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.760145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.760373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.532 [2024-11-20 07:27:51.760431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.532 qpair failed and we were unable to recover it. 00:25:48.532 [2024-11-20 07:27:51.760652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.533 [2024-11-20 07:27:51.760718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.533 qpair failed and we were unable to recover it. 00:25:48.533 [2024-11-20 07:27:51.760960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.533 [2024-11-20 07:27:51.761016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.533 qpair failed and we were unable to recover it. 00:25:48.533 [2024-11-20 07:27:51.761242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.533 [2024-11-20 07:27:51.761301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.533 qpair failed and we were unable to recover it. 00:25:48.533 [2024-11-20 07:27:51.761521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.533 [2024-11-20 07:27:51.761578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.533 qpair failed and we were unable to recover it. 
00:25:48.533 [2024-11-20 07:27:51.761850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.533 [2024-11-20 07:27:51.761925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.533 qpair failed and we were unable to recover it. 00:25:48.533 [2024-11-20 07:27:51.762155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.533 [2024-11-20 07:27:51.762211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.533 qpair failed and we were unable to recover it. 00:25:48.533 [2024-11-20 07:27:51.762433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.533 [2024-11-20 07:27:51.762508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.533 qpair failed and we were unable to recover it. 00:25:48.533 [2024-11-20 07:27:51.762738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.533 [2024-11-20 07:27:51.762795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.533 qpair failed and we were unable to recover it. 00:25:48.533 [2024-11-20 07:27:51.762987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.533 [2024-11-20 07:27:51.763043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.533 qpair failed and we were unable to recover it. 00:25:48.533 [2024-11-20 07:27:51.763234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.533 [2024-11-20 07:27:51.763289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.533 qpair failed and we were unable to recover it. 00:25:48.533 [2024-11-20 07:27:51.763536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.533 [2024-11-20 07:27:51.763614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.533 qpair failed and we were unable to recover it. 00:25:48.533 [2024-11-20 07:27:51.763833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.533 [2024-11-20 07:27:51.763908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.533 qpair failed and we were unable to recover it. 00:25:48.533 [2024-11-20 07:27:51.764114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.533 [2024-11-20 07:27:51.764172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.533 qpair failed and we were unable to recover it. 00:25:48.533 [2024-11-20 07:27:51.764412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.533 [2024-11-20 07:27:51.764488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.533 qpair failed and we were unable to recover it. 
00:25:48.533 [2024-11-20 07:27:51.764716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.533 [2024-11-20 07:27:51.764793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.533 qpair failed and we were unable to recover it. 00:25:48.533 [2024-11-20 07:27:51.765006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.533 [2024-11-20 07:27:51.765064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.533 qpair failed and we were unable to recover it. 00:25:48.533 [2024-11-20 07:27:51.765268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.533 [2024-11-20 07:27:51.765334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.533 qpair failed and we were unable to recover it. 00:25:48.533 [2024-11-20 07:27:51.765584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.533 [2024-11-20 07:27:51.765661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.533 qpair failed and we were unable to recover it. 00:25:48.533 [2024-11-20 07:27:51.765952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.533 [2024-11-20 07:27:51.766026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.533 qpair failed and we were unable to recover it. 00:25:48.533 [2024-11-20 07:27:51.766279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.533 [2024-11-20 07:27:51.766346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.533 qpair failed and we were unable to recover it. 00:25:48.533 [2024-11-20 07:27:51.766629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.533 [2024-11-20 07:27:51.766704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.533 qpair failed and we were unable to recover it. 00:25:48.533 [2024-11-20 07:27:51.766896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.533 [2024-11-20 07:27:51.766977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.533 qpair failed and we were unable to recover it. 00:25:48.533 [2024-11-20 07:27:51.767198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.533 [2024-11-20 07:27:51.767254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.533 qpair failed and we were unable to recover it. 00:25:48.533 [2024-11-20 07:27:51.767470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.533 [2024-11-20 07:27:51.767545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.533 qpair failed and we were unable to recover it. 
00:25:48.536 [2024-11-20 07:27:51.828383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.536 [2024-11-20 07:27:51.828443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.536 qpair failed and we were unable to recover it. 00:25:48.536 [2024-11-20 07:27:51.828742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.536 [2024-11-20 07:27:51.828818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.536 qpair failed and we were unable to recover it. 00:25:48.536 [2024-11-20 07:27:51.829047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.536 [2024-11-20 07:27:51.829103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.536 qpair failed and we were unable to recover it. 00:25:48.536 [2024-11-20 07:27:51.829360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.536 [2024-11-20 07:27:51.829438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.536 qpair failed and we were unable to recover it. 00:25:48.536 [2024-11-20 07:27:51.829630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.536 [2024-11-20 07:27:51.829712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.536 qpair failed and we were unable to recover it. 00:25:48.536 [2024-11-20 07:27:51.829942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.536 [2024-11-20 07:27:51.830000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.536 qpair failed and we were unable to recover it. 00:25:48.536 [2024-11-20 07:27:51.830254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.536 [2024-11-20 07:27:51.830322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.536 qpair failed and we were unable to recover it. 00:25:48.536 [2024-11-20 07:27:51.830530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.536 [2024-11-20 07:27:51.830606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.536 qpair failed and we were unable to recover it. 00:25:48.536 [2024-11-20 07:27:51.830899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.536 [2024-11-20 07:27:51.830974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.536 qpair failed and we were unable to recover it. 00:25:48.536 [2024-11-20 07:27:51.831191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.536 [2024-11-20 07:27:51.831246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.536 qpair failed and we were unable to recover it. 
00:25:48.536 [2024-11-20 07:27:51.831516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.536 [2024-11-20 07:27:51.831592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.536 qpair failed and we were unable to recover it. 00:25:48.536 [2024-11-20 07:27:51.831838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.536 [2024-11-20 07:27:51.831915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.536 qpair failed and we were unable to recover it. 00:25:48.536 [2024-11-20 07:27:51.832095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.536 [2024-11-20 07:27:51.832151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.536 qpair failed and we were unable to recover it. 00:25:48.536 [2024-11-20 07:27:51.832387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.536 [2024-11-20 07:27:51.832463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.536 qpair failed and we were unable to recover it. 00:25:48.536 [2024-11-20 07:27:51.832764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.536 [2024-11-20 07:27:51.832839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.536 qpair failed and we were unable to recover it. 00:25:48.536 [2024-11-20 07:27:51.833061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.536 [2024-11-20 07:27:51.833117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.536 qpair failed and we were unable to recover it. 00:25:48.536 [2024-11-20 07:27:51.833298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.536 [2024-11-20 07:27:51.833375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.536 qpair failed and we were unable to recover it. 00:25:48.536 [2024-11-20 07:27:51.833623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.536 [2024-11-20 07:27:51.833699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.536 qpair failed and we were unable to recover it. 00:25:48.536 [2024-11-20 07:27:51.833933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.536 [2024-11-20 07:27:51.834009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.536 qpair failed and we were unable to recover it. 00:25:48.536 [2024-11-20 07:27:51.834270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.536 [2024-11-20 07:27:51.834343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.536 qpair failed and we were unable to recover it. 
00:25:48.537 [2024-11-20 07:27:51.834583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.834660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.834957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.835031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.835278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.835366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.835602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.835677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.835896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.835972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.836165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.836222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.836454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.836531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.836699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.836755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.837007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.837081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.837297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.837368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 
00:25:48.537 [2024-11-20 07:27:51.837572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.837648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.837814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.837870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.838093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.838150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.838388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.838468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.838662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.838737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.838949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.839007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.839236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.839293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.839484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.839540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.839792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.839848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.840030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.840086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 
00:25:48.537 [2024-11-20 07:27:51.840336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.840393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.840637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.840711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.840986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.841065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.841281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.841351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.841531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.841609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.841885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.841959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.842176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.842232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.842454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.842530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.842782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.842857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.843089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.843165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 
00:25:48.537 [2024-11-20 07:27:51.843442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.843520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.843737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.843794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.844048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.844122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.844382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.844459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.844746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.844821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.845040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.845097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.845326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.845384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.845682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.845759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.845989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.846064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.846267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.846333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 
00:25:48.537 [2024-11-20 07:27:51.846586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.846644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.846840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.846914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.847167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.847223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.847448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.847504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.847748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.847824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.848050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.848106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.848327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.848384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.848685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.848742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.848979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.849053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.849262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.849330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 
00:25:48.537 [2024-11-20 07:27:51.849622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.849697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.849928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.850004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.850174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.850232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.850470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.850547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.850778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.850855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.851018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.851075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.851364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.851443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.851677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.851751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.852003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.852059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.852293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.852359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 
00:25:48.537 [2024-11-20 07:27:51.852638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.852721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.853018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.853097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.537 [2024-11-20 07:27:51.853394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.537 [2024-11-20 07:27:51.853449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.537 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.853679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.853763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.854004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.854081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.854298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.854368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.854590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.854666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.854938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.855014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.855179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.855236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.855490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.855548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 
00:25:48.538 [2024-11-20 07:27:51.855761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.855839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.856084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.856141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.856294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.856361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.856587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.856663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.856857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.856932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.857151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.857208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.857421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.857498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.857760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.857817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.858012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.858068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.858250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.858328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 
00:25:48.538 [2024-11-20 07:27:51.858557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.858613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.858783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.858841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.859033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.859088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.859261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.859332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.859620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.859696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.859930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.860005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.860217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.860273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.860496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.860573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.860841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.860898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.861093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.861149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 
00:25:48.538 [2024-11-20 07:27:51.861326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.861383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.861662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.861738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.861945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.862021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.862191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.862247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.862510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.862567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.862852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.862927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.863135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.863191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.863417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.863493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.863692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.863768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.863942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.864000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 
00:25:48.538 [2024-11-20 07:27:51.864219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.864275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.864515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.864591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.864885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.864961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.865182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.865253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.865492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.865571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.865736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.865793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.866009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.866066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.866289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.866360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.866562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.866637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.866834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.866910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 
00:25:48.538 [2024-11-20 07:27:51.867125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.867182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.867426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.867502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.867747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.867824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.868038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.868096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.868377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.868453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.868700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.868774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.868997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.869053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.869279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.869349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.869628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.538 [2024-11-20 07:27:51.869705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.538 qpair failed and we were unable to recover it. 00:25:48.538 [2024-11-20 07:27:51.869958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.870033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 
00:25:48.539 [2024-11-20 07:27:51.870243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.870300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.870595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.870654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.870896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.870971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.871233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.871290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.871546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.871621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.871918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.871994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.872226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.872284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.872544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.872619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.872865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.872939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.873160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.873217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 
00:25:48.539 [2024-11-20 07:27:51.873444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.873524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.873730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.873807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.874069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.874144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.874343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.874402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.874580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.874639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.874862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.874918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.875125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.875181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.875388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.875466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.875714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.875792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.875984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.876041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 
00:25:48.539 [2024-11-20 07:27:51.876223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.876278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.876508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.876590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.876819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.876893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.877086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.877151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.877368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.877427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.877643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.877700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.877955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.878012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.878199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.878258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.878489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.878548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.878767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.878824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 
00:25:48.539 [2024-11-20 07:27:51.879041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.879098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.879273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.879345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.879581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.879639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.879921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.879998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.880251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.880320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.880540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.880615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.880863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.880942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.881118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.881174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.881383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.881464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.881744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.881819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 
00:25:48.539 [2024-11-20 07:27:51.882049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.882105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.882350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.882408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.882686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.882763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.883009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.883084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.883262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.883335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.883596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.883674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.883933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.884009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.884202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.884258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.884506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.884582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.884862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.884938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 
00:25:48.539 [2024-11-20 07:27:51.885195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.885251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.885517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.885593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.885839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.885915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.886092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.886148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.886348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.886407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.886650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.886726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.886964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.887040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.887268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.887339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.887573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.539 [2024-11-20 07:27:51.887630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.539 qpair failed and we were unable to recover it. 00:25:48.539 [2024-11-20 07:27:51.887847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.887926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 
00:25:48.540 [2024-11-20 07:27:51.888143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.888200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.888430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.888507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.888755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.888832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.889040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.889107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.889325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.889383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.889510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.889544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.889647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.889681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.889778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.889812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.889917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.889951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.890079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.890113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 
00:25:48.540 [2024-11-20 07:27:51.890223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.890257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.890405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.890439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.890688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.890745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.890956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.891014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.891186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.891219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.891345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.891379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.891522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.891556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.891709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.891767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.891966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.892000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.892229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.892262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 
00:25:48.540 [2024-11-20 07:27:51.892389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.892423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.892527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.892560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.892690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.892724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.892867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.892902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.893025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.893060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.893208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.893243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.893378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.893413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.893518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.893553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.893689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.893722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.893942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.893999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 
00:25:48.540 [2024-11-20 07:27:51.894229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.894292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.894414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.894447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.894579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.894629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.894778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.894814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.894946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.894996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.895153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.895187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.895294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.895336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.895450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.895483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.895622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.895655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.895765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.895798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 
00:25:48.540 [2024-11-20 07:27:51.895904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.895938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.896040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.896074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.896182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.896215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.896347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.896386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.896498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.896532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.896658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.896691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.896791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.896824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.896930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.896963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.897079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.897112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.897211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.897244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 
00:25:48.540 [2024-11-20 07:27:51.897392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.897426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.897522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.897555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.897689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.897724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.897860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.540 [2024-11-20 07:27:51.897894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.540 qpair failed and we were unable to recover it. 00:25:48.540 [2024-11-20 07:27:51.898005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.898038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.898164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.898197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.898336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.898371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.898510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.898546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.898737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.898794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.899030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.899088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 
00:25:48.541 [2024-11-20 07:27:51.899256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.899320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.899440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.899473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.899614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.899649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.899815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.899871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.900119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.900175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.900382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.900416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.900532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.900565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.900729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.900763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.900970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.901031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.901230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.901263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 
00:25:48.541 [2024-11-20 07:27:51.901395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.901429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.901532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.901566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.901763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.901796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.902034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.902067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.903778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.903810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.903957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.903985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.904076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.904103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.904199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.904226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.904314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.904342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.904479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.904529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 
00:25:48.541 [2024-11-20 07:27:51.904632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.904659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.904750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.904777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.904863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.904890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.904998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.905029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.905148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.905174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.905259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.905288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.905376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.905403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.905490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.905517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.905605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.905632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.905742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.905770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 
00:25:48.541 [2024-11-20 07:27:51.905863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.905889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.905999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.906026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.906134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.906162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.906273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.906299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.906406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.906433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.906556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.906583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.906672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.906700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.906793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.906820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.906965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.906992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.907104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.907130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 
00:25:48.541 [2024-11-20 07:27:51.907240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.907266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.907372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.907399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.907513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.907540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.907651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.907678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.907797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.907824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.907916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.907942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.908063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.908089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.908177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.908204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.908315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.908342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.908434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.908461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 
00:25:48.541 [2024-11-20 07:27:51.908587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.908613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.908703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.908731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.908808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.908835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.908921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.908947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.909039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.909066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.909146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.909173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.909260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.541 [2024-11-20 07:27:51.909287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.541 qpair failed and we were unable to recover it. 00:25:48.541 [2024-11-20 07:27:51.909380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.909406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.909494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.909521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.909605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.909634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 
00:25:48.542 [2024-11-20 07:27:51.909742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.909769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.909867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.909894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.910003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.910029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.910121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.910152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.910263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.910290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.910389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.910416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.910501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.910529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.910646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.910673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.910771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.910797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.910913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.910940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 
00:25:48.542 [2024-11-20 07:27:51.911060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.911086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.911179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.911205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.911323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.911351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.911464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.911490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.911592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.911618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.911708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.911735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.911822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.911849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.911994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.912036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.912139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.912169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.912260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.912288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 
00:25:48.542 [2024-11-20 07:27:51.912388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.912416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.912509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.912536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.912623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.912651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.912730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.912758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.912853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.912882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.913000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.913027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.913118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.913144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.913225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.913250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.913349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.913376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.913464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.913491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 
00:25:48.542 [2024-11-20 07:27:51.913634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.913675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.913798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.913826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.913941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.913970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.914055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.914083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.914167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.914194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.914314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.914342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.914432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.914459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.914540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.914566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.914655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.914684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.914778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.914806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 
00:25:48.542 [2024-11-20 07:27:51.914896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.914924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.915036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.915063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.915156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.915182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.915275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.915314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.915410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.915436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.915517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.915543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.915662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.915687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.915777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.915802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.915925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.915953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.916070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.916097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 
00:25:48.542 [2024-11-20 07:27:51.916188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.916216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.916312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.916339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.916430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.916457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.916561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.916600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.916684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.916712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.916799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.916828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.916923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.916949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.917032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.917059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.917140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.917167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.542 qpair failed and we were unable to recover it. 00:25:48.542 [2024-11-20 07:27:51.917280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.542 [2024-11-20 07:27:51.917314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 
00:25:48.543 [2024-11-20 07:27:51.917409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.917435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.917519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.917547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.917639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.917667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.917745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.917772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.917861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.917888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.917977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.918004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.918093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.918119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.918214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.918240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.918333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.918360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.918450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.918476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 
00:25:48.543 [2024-11-20 07:27:51.918612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.918653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.918799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.918828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.918944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.918971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.919075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.919102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.919199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.919240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.919337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.919367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.919457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.919485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.919578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.919628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.919763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.919797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.919900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.919949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 
00:25:48.543 [2024-11-20 07:27:51.920084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.920119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.920251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.920284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.920401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.920429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.920542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.920592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.920724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.920759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.920924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.920958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.921063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.921097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.921222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.921249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.921352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.921380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.921496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.921523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 
00:25:48.543 [2024-11-20 07:27:51.921634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.921667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.921783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.921818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.921957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.921991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.922102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.922135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.922255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.922292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.922416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.922444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.922531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.922557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.922638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.922664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.922756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.922783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.922880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.922907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 
00:25:48.543 [2024-11-20 07:27:51.922994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.923022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.923138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.923165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.923283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.923341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.923434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.923461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.923553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.923588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.923677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.923704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.923869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.923903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.924040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.924073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.924220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.924254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.924378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.924406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 
00:25:48.543 [2024-11-20 07:27:51.924512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.924563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.924728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.924765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.924867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.924902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.925037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.925064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.925183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.925210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.925300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.925334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.925423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.543 [2024-11-20 07:27:51.925450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.543 qpair failed and we were unable to recover it. 00:25:48.543 [2024-11-20 07:27:51.925527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.925554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.925630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.925656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.925737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.925763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 
00:25:48.544 [2024-11-20 07:27:51.925855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.925881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.925967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.925994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.926077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.926104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.926186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.926215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.926315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.926343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.926418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.926445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.926525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.926558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.926644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.926671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.926796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.926844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.926985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.927034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 
00:25:48.544 [2024-11-20 07:27:51.927169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.927199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.927288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.927322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.927414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.927441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.927537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.927564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.927687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.927714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.927825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.927853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.927939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.927967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.928112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.928140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.928268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.928294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.928429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.928460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 
00:25:48.544 [2024-11-20 07:27:51.928569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.928603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.928725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.928768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.928875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.928909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.929091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.929125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.929229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.929276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.929442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.929491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.929690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.929725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.929853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.929886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.930000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.930032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.930181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.930210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 
00:25:48.544 [2024-11-20 07:27:51.930298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.930336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.930418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.930445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.930552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.930610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.930695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.930722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.930813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.930840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.930957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.930985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.931145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.931185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.931283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.931325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.931446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.931473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.931562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.931588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 
00:25:48.544 [2024-11-20 07:27:51.931723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.931758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.931871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.931905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.932017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.932044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.932174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.932214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.932333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.932373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.932474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.932502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.932607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.932641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.932777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.932810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.932938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.932979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.933121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.933154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 
00:25:48.544 [2024-11-20 07:27:51.933272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.933315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.933434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.933480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.933619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.933652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.933770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.933803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.933909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.933942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.934077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.934110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.934223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.934257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.934388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.934422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.934516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.544 [2024-11-20 07:27:51.934542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.544 qpair failed and we were unable to recover it. 00:25:48.544 [2024-11-20 07:27:51.934630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.934657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 
00:25:48.545 [2024-11-20 07:27:51.934756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.934783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 00:25:48.545 [2024-11-20 07:27:51.934930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.934964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 00:25:48.545 [2024-11-20 07:27:51.935078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.935112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 00:25:48.545 [2024-11-20 07:27:51.935243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.935276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 00:25:48.545 [2024-11-20 07:27:51.935419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.935446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 00:25:48.545 [2024-11-20 07:27:51.935644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.935677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 00:25:48.545 [2024-11-20 07:27:51.935788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.935821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 00:25:48.545 [2024-11-20 07:27:51.935978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.936013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 00:25:48.545 [2024-11-20 07:27:51.936153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.936187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 00:25:48.545 [2024-11-20 07:27:51.936345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.936372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 
00:25:48.545 [2024-11-20 07:27:51.936467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.936493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 00:25:48.545 [2024-11-20 07:27:51.936596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.936623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 00:25:48.545 [2024-11-20 07:27:51.936778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.936811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 00:25:48.545 [2024-11-20 07:27:51.937007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.937041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 00:25:48.545 [2024-11-20 07:27:51.937171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.937211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 00:25:48.545 [2024-11-20 07:27:51.937320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.937367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 00:25:48.545 [2024-11-20 07:27:51.937454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.937481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 00:25:48.545 [2024-11-20 07:27:51.937692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.937724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 00:25:48.545 [2024-11-20 07:27:51.937871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.937904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 00:25:48.545 [2024-11-20 07:27:51.938016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.938062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 
00:25:48.545 [2024-11-20 07:27:51.938163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.938196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 00:25:48.545 [2024-11-20 07:27:51.938331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.938382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 00:25:48.545 [2024-11-20 07:27:51.938474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.938502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 00:25:48.545 [2024-11-20 07:27:51.938625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.938652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 00:25:48.545 [2024-11-20 07:27:51.938812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.938846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 00:25:48.545 [2024-11-20 07:27:51.938981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.939028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 00:25:48.545 [2024-11-20 07:27:51.939172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.939205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 00:25:48.545 [2024-11-20 07:27:51.939340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.939368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 00:25:48.545 [2024-11-20 07:27:51.939458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.939486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 00:25:48.545 [2024-11-20 07:27:51.939599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.939626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 
00:25:48.545 [2024-11-20 07:27:51.939733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.939767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 00:25:48.545 [2024-11-20 07:27:51.939898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.939945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 00:25:48.545 [2024-11-20 07:27:51.940061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.940096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 00:25:48.545 [2024-11-20 07:27:51.940206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.940239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 00:25:48.545 [2024-11-20 07:27:51.940391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.940419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 00:25:48.545 [2024-11-20 07:27:51.940515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.940542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 00:25:48.545 [2024-11-20 07:27:51.940684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.940724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 00:25:48.545 [2024-11-20 07:27:51.940861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.940901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 00:25:48.545 [2024-11-20 07:27:51.941077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.941110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 00:25:48.545 [2024-11-20 07:27:51.941276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.941315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 
00:25:48.545 [2024-11-20 07:27:51.941427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.941454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 00:25:48.545 [2024-11-20 07:27:51.941575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.941602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 00:25:48.545 [2024-11-20 07:27:51.941748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.941780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 00:25:48.545 [2024-11-20 07:27:51.941893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.941919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.545 qpair failed and we were unable to recover it. 00:25:48.545 [2024-11-20 07:27:51.942043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.545 [2024-11-20 07:27:51.942077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.839 qpair failed and we were unable to recover it. 00:25:48.839 [2024-11-20 07:27:51.942224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.839 [2024-11-20 07:27:51.942257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.839 qpair failed and we were unable to recover it. 00:25:48.839 [2024-11-20 07:27:51.942379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.839 [2024-11-20 07:27:51.942407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.839 qpair failed and we were unable to recover it. 00:25:48.839 [2024-11-20 07:27:51.942500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.839 [2024-11-20 07:27:51.942526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.839 qpair failed and we were unable to recover it. 00:25:48.839 [2024-11-20 07:27:51.942662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.839 [2024-11-20 07:27:51.942705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.839 qpair failed and we were unable to recover it. 00:25:48.839 [2024-11-20 07:27:51.942824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.839 [2024-11-20 07:27:51.942860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.839 qpair failed and we were unable to recover it. 
00:25:48.839 [2024-11-20 07:27:51.942974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.839 [2024-11-20 07:27:51.943015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.839 qpair failed and we were unable to recover it. 00:25:48.839 [2024-11-20 07:27:51.943168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.839 [2024-11-20 07:27:51.943201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.839 qpair failed and we were unable to recover it. 00:25:48.839 [2024-11-20 07:27:51.943404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.839 [2024-11-20 07:27:51.943432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.839 qpair failed and we were unable to recover it. 00:25:48.839 [2024-11-20 07:27:51.943530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.839 [2024-11-20 07:27:51.943556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.839 qpair failed and we were unable to recover it. 00:25:48.839 [2024-11-20 07:27:51.943714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.839 [2024-11-20 07:27:51.943747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.839 qpair failed and we were unable to recover it. 00:25:48.839 [2024-11-20 07:27:51.943881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.839 [2024-11-20 07:27:51.943914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.839 qpair failed and we were unable to recover it. 00:25:48.839 [2024-11-20 07:27:51.944073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.839 [2024-11-20 07:27:51.944106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.839 qpair failed and we were unable to recover it. 00:25:48.839 [2024-11-20 07:27:51.944218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.839 [2024-11-20 07:27:51.944253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.839 qpair failed and we were unable to recover it. 00:25:48.839 [2024-11-20 07:27:51.944376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.839 [2024-11-20 07:27:51.944403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.839 qpair failed and we were unable to recover it. 00:25:48.839 [2024-11-20 07:27:51.944486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.839 [2024-11-20 07:27:51.944512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.839 qpair failed and we were unable to recover it. 
00:25:48.839 [2024-11-20 07:27:51.944631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.839 [2024-11-20 07:27:51.944661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.839 qpair failed and we were unable to recover it. 00:25:48.839 [2024-11-20 07:27:51.944776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.839 [2024-11-20 07:27:51.944810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.839 qpair failed and we were unable to recover it. 00:25:48.839 [2024-11-20 07:27:51.944935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.839 [2024-11-20 07:27:51.944984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.839 qpair failed and we were unable to recover it. 00:25:48.839 [2024-11-20 07:27:51.945161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.839 [2024-11-20 07:27:51.945198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.839 qpair failed and we were unable to recover it. 00:25:48.839 [2024-11-20 07:27:51.945387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.839 [2024-11-20 07:27:51.945416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.839 qpair failed and we were unable to recover it. 00:25:48.839 [2024-11-20 07:27:51.945501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.839 [2024-11-20 07:27:51.945528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.839 qpair failed and we were unable to recover it. 00:25:48.839 [2024-11-20 07:27:51.945672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.839 [2024-11-20 07:27:51.945707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.839 qpair failed and we were unable to recover it. 00:25:48.839 [2024-11-20 07:27:51.945898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.839 [2024-11-20 07:27:51.945933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.839 qpair failed and we were unable to recover it. 00:25:48.839 [2024-11-20 07:27:51.946039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.839 [2024-11-20 07:27:51.946072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.839 qpair failed and we were unable to recover it. 00:25:48.839 [2024-11-20 07:27:51.946207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.839 [2024-11-20 07:27:51.946242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.839 qpair failed and we were unable to recover it. 
00:25:48.839 [2024-11-20 07:27:51.946371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.839 [2024-11-20 07:27:51.946400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.839 qpair failed and we were unable to recover it. 00:25:48.839 [2024-11-20 07:27:51.946489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.839 [2024-11-20 07:27:51.946516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.839 qpair failed and we were unable to recover it. 00:25:48.839 [2024-11-20 07:27:51.946604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.839 [2024-11-20 07:27:51.946631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.839 qpair failed and we were unable to recover it. 00:25:48.839 [2024-11-20 07:27:51.946724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.839 [2024-11-20 07:27:51.946751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.839 qpair failed and we were unable to recover it. 00:25:48.839 [2024-11-20 07:27:51.946860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.839 [2024-11-20 07:27:51.946887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.839 qpair failed and we were unable to recover it. 00:25:48.839 [2024-11-20 07:27:51.946995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.839 [2024-11-20 07:27:51.947028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.839 qpair failed and we were unable to recover it. 00:25:48.839 [2024-11-20 07:27:51.947221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.839 [2024-11-20 07:27:51.947255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.839 qpair failed and we were unable to recover it. 00:25:48.839 [2024-11-20 07:27:51.947394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.839 [2024-11-20 07:27:51.947429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.839 qpair failed and we were unable to recover it. 00:25:48.839 [2024-11-20 07:27:51.947538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.840 [2024-11-20 07:27:51.947565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.840 qpair failed and we were unable to recover it. 00:25:48.840 [2024-11-20 07:27:51.947724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.840 [2024-11-20 07:27:51.947751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.840 qpair failed and we were unable to recover it. 
00:25:48.840 [2024-11-20 07:27:51.947866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.840 [2024-11-20 07:27:51.947892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.840 qpair failed and we were unable to recover it. 00:25:48.840 [2024-11-20 07:27:51.948033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.840 [2024-11-20 07:27:51.948067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.840 qpair failed and we were unable to recover it. 00:25:48.840 [2024-11-20 07:27:51.948190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.840 [2024-11-20 07:27:51.948235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.840 qpair failed and we were unable to recover it. 00:25:48.840 [2024-11-20 07:27:51.948371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.840 [2024-11-20 07:27:51.948399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.840 qpair failed and we were unable to recover it. 00:25:48.840 [2024-11-20 07:27:51.948485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.840 [2024-11-20 07:27:51.948512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.840 qpair failed and we were unable to recover it. 00:25:48.840 [2024-11-20 07:27:51.948605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.840 [2024-11-20 07:27:51.948633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.840 qpair failed and we were unable to recover it. 00:25:48.840 [2024-11-20 07:27:51.948717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.840 [2024-11-20 07:27:51.948744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.840 qpair failed and we were unable to recover it. 00:25:48.840 [2024-11-20 07:27:51.948885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.840 [2024-11-20 07:27:51.948920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.840 qpair failed and we were unable to recover it. 00:25:48.840 [2024-11-20 07:27:51.949040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.840 [2024-11-20 07:27:51.949067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.840 qpair failed and we were unable to recover it. 00:25:48.840 [2024-11-20 07:27:51.949223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.840 [2024-11-20 07:27:51.949267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.840 qpair failed and we were unable to recover it. 
00:25:48.840 [2024-11-20 07:27:51.949360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.840 [2024-11-20 07:27:51.949387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.840 qpair failed and we were unable to recover it. 00:25:48.840 [2024-11-20 07:27:51.949487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.840 [2024-11-20 07:27:51.949520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.840 qpair failed and we were unable to recover it. 00:25:48.840 [2024-11-20 07:27:51.949669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.840 [2024-11-20 07:27:51.949702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.840 qpair failed and we were unable to recover it. 00:25:48.840 [2024-11-20 07:27:51.949840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.840 [2024-11-20 07:27:51.949873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.840 qpair failed and we were unable to recover it. 00:25:48.840 [2024-11-20 07:27:51.949981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.840 [2024-11-20 07:27:51.950015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.840 qpair failed and we were unable to recover it. 00:25:48.840 [2024-11-20 07:27:51.950138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.840 [2024-11-20 07:27:51.950174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.840 qpair failed and we were unable to recover it. 00:25:48.840 [2024-11-20 07:27:51.950327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.840 [2024-11-20 07:27:51.950363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.840 qpair failed and we were unable to recover it. 00:25:48.840 [2024-11-20 07:27:51.950476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.840 [2024-11-20 07:27:51.950509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.840 qpair failed and we were unable to recover it. 00:25:48.840 [2024-11-20 07:27:51.950658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.840 [2024-11-20 07:27:51.950691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.840 qpair failed and we were unable to recover it. 00:25:48.840 [2024-11-20 07:27:51.950800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.840 [2024-11-20 07:27:51.950834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.840 qpair failed and we were unable to recover it. 
00:25:48.840 [2024-11-20 07:27:51.950981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.840 [2024-11-20 07:27:51.951014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.840 qpair failed and we were unable to recover it. 00:25:48.840 [2024-11-20 07:27:51.951127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.840 [2024-11-20 07:27:51.951161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.840 qpair failed and we were unable to recover it. 00:25:48.840 [2024-11-20 07:27:51.951289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.840 [2024-11-20 07:27:51.951345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.840 qpair failed and we were unable to recover it. 00:25:48.840 [2024-11-20 07:27:51.951453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.840 [2024-11-20 07:27:51.951487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.840 qpair failed and we were unable to recover it. 00:25:48.840 [2024-11-20 07:27:51.951625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.840 [2024-11-20 07:27:51.951659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.840 qpair failed and we were unable to recover it. 00:25:48.840 [2024-11-20 07:27:51.951842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.840 [2024-11-20 07:27:51.951876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.840 qpair failed and we were unable to recover it. 00:25:48.840 [2024-11-20 07:27:51.951994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.840 [2024-11-20 07:27:51.952028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.840 qpair failed and we were unable to recover it. 00:25:48.840 [2024-11-20 07:27:51.952178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.840 [2024-11-20 07:27:51.952212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.840 qpair failed and we were unable to recover it. 00:25:48.840 [2024-11-20 07:27:51.952344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.840 [2024-11-20 07:27:51.952378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.840 qpair failed and we were unable to recover it. 00:25:48.840 [2024-11-20 07:27:51.952478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.840 [2024-11-20 07:27:51.952511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.840 qpair failed and we were unable to recover it. 
00:25:48.840 [2024-11-20 07:27:51.952691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.840 [2024-11-20 07:27:51.952729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.840 qpair failed and we were unable to recover it. 00:25:48.840 [2024-11-20 07:27:51.952870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.840 [2024-11-20 07:27:51.952911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.840 qpair failed and we were unable to recover it. 00:25:48.840 [2024-11-20 07:27:51.953070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.840 [2024-11-20 07:27:51.953103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.840 qpair failed and we were unable to recover it. 00:25:48.840 [2024-11-20 07:27:51.953219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.840 [2024-11-20 07:27:51.953254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.840 qpair failed and we were unable to recover it. 00:25:48.840 [2024-11-20 07:27:51.953397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.840 [2024-11-20 07:27:51.953431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.840 qpair failed and we were unable to recover it. 00:25:48.840 [2024-11-20 07:27:51.953532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.840 [2024-11-20 07:27:51.953566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.840 qpair failed and we were unable to recover it. 00:25:48.841 [2024-11-20 07:27:51.953672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.841 [2024-11-20 07:27:51.953705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.841 qpair failed and we were unable to recover it. 00:25:48.841 [2024-11-20 07:27:51.953849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.841 [2024-11-20 07:27:51.953889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.841 qpair failed and we were unable to recover it. 00:25:48.841 [2024-11-20 07:27:51.953999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.841 [2024-11-20 07:27:51.954033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.841 qpair failed and we were unable to recover it. 00:25:48.841 [2024-11-20 07:27:51.954174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.841 [2024-11-20 07:27:51.954208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.841 qpair failed and we were unable to recover it. 
00:25:48.841 [2024-11-20 07:27:51.954356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.841 [2024-11-20 07:27:51.954390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.841 qpair failed and we were unable to recover it. 00:25:48.841 [2024-11-20 07:27:51.954508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.841 [2024-11-20 07:27:51.954544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.841 qpair failed and we were unable to recover it. 00:25:48.841 [2024-11-20 07:27:51.954681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.841 [2024-11-20 07:27:51.954715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.841 qpair failed and we were unable to recover it. 00:25:48.841 [2024-11-20 07:27:51.954833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.841 [2024-11-20 07:27:51.954866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.841 qpair failed and we were unable to recover it. 00:25:48.841 [2024-11-20 07:27:51.954982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.841 [2024-11-20 07:27:51.955016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.841 qpair failed and we were unable to recover it. 00:25:48.841 [2024-11-20 07:27:51.955162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.841 [2024-11-20 07:27:51.955212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.841 qpair failed and we were unable to recover it. 00:25:48.841 [2024-11-20 07:27:51.955370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.841 [2024-11-20 07:27:51.955407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.841 qpair failed and we were unable to recover it. 00:25:48.841 [2024-11-20 07:27:51.955514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.841 [2024-11-20 07:27:51.955547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.841 qpair failed and we were unable to recover it. 00:25:48.841 [2024-11-20 07:27:51.955687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.841 [2024-11-20 07:27:51.955721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.841 qpair failed and we were unable to recover it. 00:25:48.841 [2024-11-20 07:27:51.955836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.841 [2024-11-20 07:27:51.955870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.841 qpair failed and we were unable to recover it. 
00:25:48.841 [2024-11-20 07:27:51.956001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.841 [2024-11-20 07:27:51.956042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.841 qpair failed and we were unable to recover it. 00:25:48.841 [2024-11-20 07:27:51.956156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.841 [2024-11-20 07:27:51.956189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.841 qpair failed and we were unable to recover it. 00:25:48.841 [2024-11-20 07:27:51.956295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.841 [2024-11-20 07:27:51.956353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.841 qpair failed and we were unable to recover it. 00:25:48.841 [2024-11-20 07:27:51.956482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.841 [2024-11-20 07:27:51.956517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.841 qpair failed and we were unable to recover it. 00:25:48.841 [2024-11-20 07:27:51.956628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.841 [2024-11-20 07:27:51.956662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.841 qpair failed and we were unable to recover it. 00:25:48.841 [2024-11-20 07:27:51.956789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.841 [2024-11-20 07:27:51.956823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.841 qpair failed and we were unable to recover it. 00:25:48.841 [2024-11-20 07:27:51.956970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.841 [2024-11-20 07:27:51.957003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.841 qpair failed and we were unable to recover it. 00:25:48.841 [2024-11-20 07:27:51.957118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.841 [2024-11-20 07:27:51.957152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.841 qpair failed and we were unable to recover it. 00:25:48.841 [2024-11-20 07:27:51.957301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.841 [2024-11-20 07:27:51.957342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.841 qpair failed and we were unable to recover it. 00:25:48.841 [2024-11-20 07:27:51.957449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.841 [2024-11-20 07:27:51.957482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.841 qpair failed and we were unable to recover it. 
00:25:48.841 [2024-11-20 07:27:51.957642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.841 [2024-11-20 07:27:51.957675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.841 qpair failed and we were unable to recover it. 00:25:48.841 [2024-11-20 07:27:51.957787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.841 [2024-11-20 07:27:51.957820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.841 qpair failed and we were unable to recover it. 00:25:48.841 [2024-11-20 07:27:51.957938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.841 [2024-11-20 07:27:51.957971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.841 qpair failed and we were unable to recover it. 00:25:48.841 [2024-11-20 07:27:51.958101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.841 [2024-11-20 07:27:51.958135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.841 qpair failed and we were unable to recover it. 00:25:48.841 [2024-11-20 07:27:51.958257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.841 [2024-11-20 07:27:51.958292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.841 qpair failed and we were unable to recover it. 00:25:48.841 [2024-11-20 07:27:51.958439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.841 [2024-11-20 07:27:51.958473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.841 qpair failed and we were unable to recover it. 00:25:48.841 [2024-11-20 07:27:51.958646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.841 [2024-11-20 07:27:51.958702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.841 qpair failed and we were unable to recover it. 00:25:48.841 [2024-11-20 07:27:51.958847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.841 [2024-11-20 07:27:51.958889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.841 qpair failed and we were unable to recover it. 00:25:48.841 [2024-11-20 07:27:51.959027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.841 [2024-11-20 07:27:51.959069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.841 qpair failed and we were unable to recover it. 00:25:48.841 [2024-11-20 07:27:51.959235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.841 [2024-11-20 07:27:51.959278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.841 qpair failed and we were unable to recover it. 
00:25:48.841 [2024-11-20 07:27:51.959466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.841 [2024-11-20 07:27:51.959501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.841 qpair failed and we were unable to recover it. 00:25:48.841 [2024-11-20 07:27:51.959610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.841 [2024-11-20 07:27:51.959642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.841 qpair failed and we were unable to recover it. 00:25:48.841 [2024-11-20 07:27:51.959780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.841 [2024-11-20 07:27:51.959825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.841 qpair failed and we were unable to recover it. 00:25:48.841 [2024-11-20 07:27:51.959929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.841 [2024-11-20 07:27:51.959963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.842 qpair failed and we were unable to recover it. 00:25:48.842 [2024-11-20 07:27:51.960143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.842 [2024-11-20 07:27:51.960178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.842 qpair failed and we were unable to recover it. 00:25:48.842 [2024-11-20 07:27:51.960317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.842 [2024-11-20 07:27:51.960363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.842 qpair failed and we were unable to recover it. 00:25:48.842 [2024-11-20 07:27:51.960480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.842 [2024-11-20 07:27:51.960515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.842 qpair failed and we were unable to recover it. 00:25:48.842 [2024-11-20 07:27:51.960700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.842 [2024-11-20 07:27:51.960758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.842 qpair failed and we were unable to recover it. 00:25:48.842 [2024-11-20 07:27:51.960927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.842 [2024-11-20 07:27:51.960969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.842 qpair failed and we were unable to recover it. 00:25:48.842 [2024-11-20 07:27:51.961108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.842 [2024-11-20 07:27:51.961156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.842 qpair failed and we were unable to recover it. 
00:25:48.842 [2024-11-20 07:27:51.961296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.842 [2024-11-20 07:27:51.961336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.842 qpair failed and we were unable to recover it. 00:25:48.842 [2024-11-20 07:27:51.961505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.842 [2024-11-20 07:27:51.961540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.842 qpair failed and we were unable to recover it. 00:25:48.842 [2024-11-20 07:27:51.961713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.842 [2024-11-20 07:27:51.961766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.842 qpair failed and we were unable to recover it. 00:25:48.842 [2024-11-20 07:27:51.961943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.842 [2024-11-20 07:27:51.961977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.842 qpair failed and we were unable to recover it. 00:25:48.842 [2024-11-20 07:27:51.962114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.842 [2024-11-20 07:27:51.962165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.842 qpair failed and we were unable to recover it. 00:25:48.842 [2024-11-20 07:27:51.962368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.842 [2024-11-20 07:27:51.962405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.842 qpair failed and we were unable to recover it. 00:25:48.842 [2024-11-20 07:27:51.962514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.842 [2024-11-20 07:27:51.962548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.842 qpair failed and we were unable to recover it. 00:25:48.842 [2024-11-20 07:27:51.963773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.842 [2024-11-20 07:27:51.963808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.842 qpair failed and we were unable to recover it. 00:25:48.842 [2024-11-20 07:27:51.963950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.842 [2024-11-20 07:27:51.963981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.842 qpair failed and we were unable to recover it. 00:25:48.842 [2024-11-20 07:27:51.964110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.842 [2024-11-20 07:27:51.964140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.842 qpair failed and we were unable to recover it. 
00:25:48.842 [2024-11-20 07:27:51.964247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.842 [2024-11-20 07:27:51.964277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.842 qpair failed and we were unable to recover it. 00:25:48.842 [2024-11-20 07:27:51.964414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.842 [2024-11-20 07:27:51.964444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.842 qpair failed and we were unable to recover it. 00:25:48.842 [2024-11-20 07:27:51.964544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.842 [2024-11-20 07:27:51.964573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.842 qpair failed and we were unable to recover it. 00:25:48.842 [2024-11-20 07:27:51.964669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.842 [2024-11-20 07:27:51.964698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.842 qpair failed and we were unable to recover it. 00:25:48.842 [2024-11-20 07:27:51.964793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.842 [2024-11-20 07:27:51.964823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.842 qpair failed and we were unable to recover it. 00:25:48.842 [2024-11-20 07:27:51.964957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.842 [2024-11-20 07:27:51.964999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.842 qpair failed and we were unable to recover it. 00:25:48.842 [2024-11-20 07:27:51.965106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.842 [2024-11-20 07:27:51.965136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.842 qpair failed and we were unable to recover it. 00:25:48.842 [2024-11-20 07:27:51.965239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.842 [2024-11-20 07:27:51.965269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.842 qpair failed and we were unable to recover it. 00:25:48.842 [2024-11-20 07:27:51.965426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.842 [2024-11-20 07:27:51.965457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.842 qpair failed and we were unable to recover it. 00:25:48.842 [2024-11-20 07:27:51.965551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.842 [2024-11-20 07:27:51.965581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.842 qpair failed and we were unable to recover it. 
00:25:48.842 [2024-11-20 07:27:51.965679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.842 [2024-11-20 07:27:51.965708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.842 qpair failed and we were unable to recover it. 00:25:48.842 [2024-11-20 07:27:51.965801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.842 [2024-11-20 07:27:51.965831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.842 qpair failed and we were unable to recover it. 00:25:48.842 [2024-11-20 07:27:51.965957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.842 [2024-11-20 07:27:51.965987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.842 qpair failed and we were unable to recover it. 00:25:48.842 [2024-11-20 07:27:51.966118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.842 [2024-11-20 07:27:51.966147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.842 qpair failed and we were unable to recover it. 00:25:48.842 [2024-11-20 07:27:51.966249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.842 [2024-11-20 07:27:51.966279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.842 qpair failed and we were unable to recover it. 00:25:48.842 [2024-11-20 07:27:51.966414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.842 [2024-11-20 07:27:51.966444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.842 qpair failed and we were unable to recover it. 00:25:48.842 [2024-11-20 07:27:51.966532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.842 [2024-11-20 07:27:51.966562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.842 qpair failed and we were unable to recover it. 00:25:48.842 [2024-11-20 07:27:51.966688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.842 [2024-11-20 07:27:51.966717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.842 qpair failed and we were unable to recover it. 00:25:48.842 [2024-11-20 07:27:51.966847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.842 [2024-11-20 07:27:51.966877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.842 qpair failed and we were unable to recover it. 00:25:48.842 [2024-11-20 07:27:51.967012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.842 [2024-11-20 07:27:51.967052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.842 qpair failed and we were unable to recover it. 
00:25:48.842 [2024-11-20 07:27:51.967148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.843 [2024-11-20 07:27:51.967179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.843 qpair failed and we were unable to recover it. 00:25:48.843 [2024-11-20 07:27:51.967283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.843 [2024-11-20 07:27:51.967335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.843 qpair failed and we were unable to recover it. 00:25:48.843 [2024-11-20 07:27:51.967487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.843 [2024-11-20 07:27:51.967517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.843 qpair failed and we were unable to recover it. 00:25:48.843 [2024-11-20 07:27:51.967672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.843 [2024-11-20 07:27:51.967702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.843 qpair failed and we were unable to recover it. 00:25:48.843 [2024-11-20 07:27:51.967795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.843 [2024-11-20 07:27:51.967825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.843 qpair failed and we were unable to recover it. 00:25:48.843 [2024-11-20 07:27:51.967978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.843 [2024-11-20 07:27:51.968008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.843 qpair failed and we were unable to recover it. 00:25:48.843 [2024-11-20 07:27:51.968160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.843 [2024-11-20 07:27:51.968191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.843 qpair failed and we were unable to recover it. 00:25:48.843 [2024-11-20 07:27:51.968292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.843 [2024-11-20 07:27:51.968335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.843 qpair failed and we were unable to recover it. 00:25:48.843 [2024-11-20 07:27:51.969138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.843 [2024-11-20 07:27:51.969171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.843 qpair failed and we were unable to recover it. 00:25:48.843 [2024-11-20 07:27:51.969300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.843 [2024-11-20 07:27:51.969339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.843 qpair failed and we were unable to recover it. 
00:25:48.843 [2024-11-20 07:27:51.970191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.843 [2024-11-20 07:27:51.970224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.843 qpair failed and we were unable to recover it. 00:25:48.843 [2024-11-20 07:27:51.970354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.843 [2024-11-20 07:27:51.970384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.843 qpair failed and we were unable to recover it. 00:25:48.843 [2024-11-20 07:27:51.971205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.843 [2024-11-20 07:27:51.971239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.843 qpair failed and we were unable to recover it. 00:25:48.843 [2024-11-20 07:27:51.971375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.843 [2024-11-20 07:27:51.971404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.843 qpair failed and we were unable to recover it. 00:25:48.843 [2024-11-20 07:27:51.971535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.843 [2024-11-20 07:27:51.971565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.843 qpair failed and we were unable to recover it. 00:25:48.843 [2024-11-20 07:27:51.971674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.843 [2024-11-20 07:27:51.971705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.843 qpair failed and we were unable to recover it. 00:25:48.843 [2024-11-20 07:27:51.971855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.843 [2024-11-20 07:27:51.971885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.843 qpair failed and we were unable to recover it. 00:25:48.843 [2024-11-20 07:27:51.972013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.843 [2024-11-20 07:27:51.972045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.843 qpair failed and we were unable to recover it. 00:25:48.843 [2024-11-20 07:27:51.972168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.843 [2024-11-20 07:27:51.972198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.843 qpair failed and we were unable to recover it. 00:25:48.843 [2024-11-20 07:27:51.972311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.843 [2024-11-20 07:27:51.972341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.843 qpair failed and we were unable to recover it. 
00:25:48.843 [2024-11-20 07:27:51.972463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.843 [2024-11-20 07:27:51.972494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.843 qpair failed and we were unable to recover it. 00:25:48.843 [2024-11-20 07:27:51.972650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.843 [2024-11-20 07:27:51.972681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.843 qpair failed and we were unable to recover it. 00:25:48.843 [2024-11-20 07:27:51.972773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.843 [2024-11-20 07:27:51.972802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.843 qpair failed and we were unable to recover it. 00:25:48.843 [2024-11-20 07:27:51.972905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.843 [2024-11-20 07:27:51.972934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.843 qpair failed and we were unable to recover it. 00:25:48.843 [2024-11-20 07:27:51.973084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.843 [2024-11-20 07:27:51.973114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.843 qpair failed and we were unable to recover it. 00:25:48.843 [2024-11-20 07:27:51.973216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.843 [2024-11-20 07:27:51.973256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.843 qpair failed and we were unable to recover it. 00:25:48.843 [2024-11-20 07:27:51.973360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.843 [2024-11-20 07:27:51.973391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.843 qpair failed and we were unable to recover it. 00:25:48.843 [2024-11-20 07:27:51.973492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.843 [2024-11-20 07:27:51.973523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.843 qpair failed and we were unable to recover it. 00:25:48.843 [2024-11-20 07:27:51.973636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.843 [2024-11-20 07:27:51.973679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.843 qpair failed and we were unable to recover it. 00:25:48.843 [2024-11-20 07:27:51.973815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.843 [2024-11-20 07:27:51.973846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.843 qpair failed and we were unable to recover it. 
00:25:48.843 [2024-11-20 07:27:51.973951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.843 [2024-11-20 07:27:51.973981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.843 qpair failed and we were unable to recover it. 00:25:48.844 [2024-11-20 07:27:51.974112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.844 [2024-11-20 07:27:51.974152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.844 qpair failed and we were unable to recover it. 00:25:48.844 [2024-11-20 07:27:51.974273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.844 [2024-11-20 07:27:51.974313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.844 qpair failed and we were unable to recover it. 00:25:48.844 [2024-11-20 07:27:51.974414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.844 [2024-11-20 07:27:51.974444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.844 qpair failed and we were unable to recover it. 00:25:48.844 [2024-11-20 07:27:51.974567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.844 [2024-11-20 07:27:51.974617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.844 qpair failed and we were unable to recover it. 00:25:48.844 [2024-11-20 07:27:51.974748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.844 [2024-11-20 07:27:51.974801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.844 qpair failed and we were unable to recover it. 00:25:48.844 [2024-11-20 07:27:51.974922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.844 [2024-11-20 07:27:51.974974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.844 qpair failed and we were unable to recover it. 00:25:48.844 [2024-11-20 07:27:51.975093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.844 [2024-11-20 07:27:51.975122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.844 qpair failed and we were unable to recover it. 00:25:48.844 [2024-11-20 07:27:51.975223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.844 [2024-11-20 07:27:51.975253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.844 qpair failed and we were unable to recover it. 00:25:48.844 [2024-11-20 07:27:51.975370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.844 [2024-11-20 07:27:51.975401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.844 qpair failed and we were unable to recover it. 
00:25:48.844 [2024-11-20 07:27:51.975529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.844 [2024-11-20 07:27:51.975559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.844 qpair failed and we were unable to recover it. 00:25:48.844 [2024-11-20 07:27:51.975662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.844 [2024-11-20 07:27:51.975692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.844 qpair failed and we were unable to recover it. 00:25:48.844 [2024-11-20 07:27:51.975790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.844 [2024-11-20 07:27:51.975821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.844 qpair failed and we were unable to recover it. 00:25:48.844 [2024-11-20 07:27:51.975987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.844 [2024-11-20 07:27:51.976021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.844 qpair failed and we were unable to recover it. 00:25:48.844 [2024-11-20 07:27:51.976116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.844 [2024-11-20 07:27:51.976146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.844 qpair failed and we were unable to recover it. 00:25:48.844 [2024-11-20 07:27:51.976250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.844 [2024-11-20 07:27:51.976280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.844 qpair failed and we were unable to recover it. 00:25:48.844 [2024-11-20 07:27:51.976399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.844 [2024-11-20 07:27:51.976430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.844 qpair failed and we were unable to recover it. 00:25:48.844 [2024-11-20 07:27:51.976529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.844 [2024-11-20 07:27:51.976565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.844 qpair failed and we were unable to recover it. 00:25:48.844 [2024-11-20 07:27:51.976656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.844 [2024-11-20 07:27:51.976686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.844 qpair failed and we were unable to recover it. 00:25:48.844 [2024-11-20 07:27:51.976810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.844 [2024-11-20 07:27:51.976840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.844 qpair failed and we were unable to recover it. 
00:25:48.844 [2024-11-20 07:27:51.976969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.844 [2024-11-20 07:27:51.976999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.844 qpair failed and we were unable to recover it. 00:25:48.844 [2024-11-20 07:27:51.977134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.844 [2024-11-20 07:27:51.977163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.844 qpair failed and we were unable to recover it. 00:25:48.844 [2024-11-20 07:27:51.977270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.844 [2024-11-20 07:27:51.977299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.844 qpair failed and we were unable to recover it. 00:25:48.844 [2024-11-20 07:27:51.977430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.844 [2024-11-20 07:27:51.977461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.844 qpair failed and we were unable to recover it. 00:25:48.844 [2024-11-20 07:27:51.977562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.844 [2024-11-20 07:27:51.977597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.844 qpair failed and we were unable to recover it. 00:25:48.844 [2024-11-20 07:27:51.977721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.844 [2024-11-20 07:27:51.977750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.844 qpair failed and we were unable to recover it. 00:25:48.844 [2024-11-20 07:27:51.977879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.844 [2024-11-20 07:27:51.977910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.844 qpair failed and we were unable to recover it. 00:25:48.844 [2024-11-20 07:27:51.978021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.844 [2024-11-20 07:27:51.978050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.844 qpair failed and we were unable to recover it. 00:25:48.844 [2024-11-20 07:27:51.978169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.844 [2024-11-20 07:27:51.978195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.844 qpair failed and we were unable to recover it. 00:25:48.844 [2024-11-20 07:27:51.978315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.844 [2024-11-20 07:27:51.978343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.844 qpair failed and we were unable to recover it. 
00:25:48.844 [2024-11-20 07:27:51.978434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.844 [2024-11-20 07:27:51.978459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.844 qpair failed and we were unable to recover it. 00:25:48.844 [2024-11-20 07:27:51.978561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.844 [2024-11-20 07:27:51.978588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.844 qpair failed and we were unable to recover it. 00:25:48.844 [2024-11-20 07:27:51.978698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.844 [2024-11-20 07:27:51.978729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.844 qpair failed and we were unable to recover it. 00:25:48.844 [2024-11-20 07:27:51.978881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.844 [2024-11-20 07:27:51.978910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.844 qpair failed and we were unable to recover it. 00:25:48.844 [2024-11-20 07:27:51.979039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.844 [2024-11-20 07:27:51.979069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.844 qpair failed and we were unable to recover it. 00:25:48.844 [2024-11-20 07:27:51.979184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.844 [2024-11-20 07:27:51.979213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.844 qpair failed and we were unable to recover it. 00:25:48.844 [2024-11-20 07:27:51.979300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.844 [2024-11-20 07:27:51.979334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.844 qpair failed and we were unable to recover it. 00:25:48.844 [2024-11-20 07:27:51.979457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.844 [2024-11-20 07:27:51.979486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.844 qpair failed and we were unable to recover it. 00:25:48.844 [2024-11-20 07:27:51.979579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.844 [2024-11-20 07:27:51.979609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.844 qpair failed and we were unable to recover it. 00:25:48.845 [2024-11-20 07:27:51.979761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.845 [2024-11-20 07:27:51.979790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.845 qpair failed and we were unable to recover it. 
00:25:48.845 [2024-11-20 07:27:51.979909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.845 [2024-11-20 07:27:51.979939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.845 qpair failed and we were unable to recover it. 00:25:48.845 [2024-11-20 07:27:51.980033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.845 [2024-11-20 07:27:51.980063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.845 qpair failed and we were unable to recover it. 00:25:48.845 [2024-11-20 07:27:51.980196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.845 [2024-11-20 07:27:51.980222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.845 qpair failed and we were unable to recover it. 00:25:48.845 [2024-11-20 07:27:51.980348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.845 [2024-11-20 07:27:51.980389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.845 qpair failed and we were unable to recover it. 00:25:48.845 [2024-11-20 07:27:51.980501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.845 [2024-11-20 07:27:51.980541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.845 qpair failed and we were unable to recover it. 00:25:48.845 [2024-11-20 07:27:51.980654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.845 [2024-11-20 07:27:51.980686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.845 qpair failed and we were unable to recover it. 00:25:48.845 [2024-11-20 07:27:51.980821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.845 [2024-11-20 07:27:51.980851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.845 qpair failed and we were unable to recover it. 00:25:48.845 [2024-11-20 07:27:51.980996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.845 [2024-11-20 07:27:51.981025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.845 qpair failed and we were unable to recover it. 00:25:48.845 [2024-11-20 07:27:51.981130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.845 [2024-11-20 07:27:51.981161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.845 qpair failed and we were unable to recover it. 00:25:48.845 [2024-11-20 07:27:51.981269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.845 [2024-11-20 07:27:51.981299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.845 qpair failed and we were unable to recover it. 
00:25:48.845 [2024-11-20 07:27:51.981420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.845 [2024-11-20 07:27:51.981446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.845 qpair failed and we were unable to recover it. 00:25:48.845 [2024-11-20 07:27:51.981552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.845 [2024-11-20 07:27:51.981580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.845 qpair failed and we were unable to recover it. 00:25:48.845 [2024-11-20 07:27:51.981727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.845 [2024-11-20 07:27:51.981756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.845 qpair failed and we were unable to recover it. 00:25:48.845 [2024-11-20 07:27:51.981855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.845 [2024-11-20 07:27:51.981885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.845 qpair failed and we were unable to recover it. 00:25:48.845 [2024-11-20 07:27:51.982009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.845 [2024-11-20 07:27:51.982039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.845 qpair failed and we were unable to recover it. 00:25:48.845 [2024-11-20 07:27:51.982176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.845 [2024-11-20 07:27:51.982221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.845 qpair failed and we were unable to recover it. 00:25:48.845 [2024-11-20 07:27:51.982370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.845 [2024-11-20 07:27:51.982400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.845 qpair failed and we were unable to recover it. 00:25:48.845 [2024-11-20 07:27:51.982494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.845 [2024-11-20 07:27:51.982526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.845 qpair failed and we were unable to recover it. 00:25:48.845 [2024-11-20 07:27:51.982685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.845 [2024-11-20 07:27:51.982738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.845 qpair failed and we were unable to recover it. 00:25:48.845 [2024-11-20 07:27:51.982917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.845 [2024-11-20 07:27:51.982972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.845 qpair failed and we were unable to recover it. 
00:25:48.845 [2024-11-20 07:27:51.983084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.845 [2024-11-20 07:27:51.983131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.845 qpair failed and we were unable to recover it. 00:25:48.845 [2024-11-20 07:27:51.983245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.845 [2024-11-20 07:27:51.983271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.845 qpair failed and we were unable to recover it. 00:25:48.845 [2024-11-20 07:27:51.983387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.845 [2024-11-20 07:27:51.983416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.845 qpair failed and we were unable to recover it. 00:25:48.845 [2024-11-20 07:27:51.983527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.845 [2024-11-20 07:27:51.983555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.845 qpair failed and we were unable to recover it. 00:25:48.845 [2024-11-20 07:27:51.983706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.845 [2024-11-20 07:27:51.983756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.845 qpair failed and we were unable to recover it. 00:25:48.845 [2024-11-20 07:27:51.983907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.845 [2024-11-20 07:27:51.983937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.845 qpair failed and we were unable to recover it. 00:25:48.845 [2024-11-20 07:27:51.984033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.845 [2024-11-20 07:27:51.984065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.845 qpair failed and we were unable to recover it. 00:25:48.845 [2024-11-20 07:27:51.984184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.845 [2024-11-20 07:27:51.984211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.845 qpair failed and we were unable to recover it. 00:25:48.845 [2024-11-20 07:27:51.984325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.845 [2024-11-20 07:27:51.984352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.845 qpair failed and we were unable to recover it. 00:25:48.845 [2024-11-20 07:27:51.984438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.845 [2024-11-20 07:27:51.984465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.845 qpair failed and we were unable to recover it. 
00:25:48.845 [2024-11-20 07:27:51.984592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.845 [2024-11-20 07:27:51.984621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.845 qpair failed and we were unable to recover it. 00:25:48.845 [2024-11-20 07:27:51.984725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.845 [2024-11-20 07:27:51.984755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.845 qpair failed and we were unable to recover it. 00:25:48.845 [2024-11-20 07:27:51.984866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.845 [2024-11-20 07:27:51.984896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.845 qpair failed and we were unable to recover it. 00:25:48.845 [2024-11-20 07:27:51.985029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.845 [2024-11-20 07:27:51.985059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.845 qpair failed and we were unable to recover it. 00:25:48.845 [2024-11-20 07:27:51.985197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.846 [2024-11-20 07:27:51.985224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.846 qpair failed and we were unable to recover it. 00:25:48.846 [2024-11-20 07:27:51.985313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.846 [2024-11-20 07:27:51.985341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.846 qpair failed and we were unable to recover it. 00:25:48.846 [2024-11-20 07:27:51.985429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.846 [2024-11-20 07:27:51.985475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.846 qpair failed and we were unable to recover it. 00:25:48.846 [2024-11-20 07:27:51.985607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.846 [2024-11-20 07:27:51.985637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.846 qpair failed and we were unable to recover it. 00:25:48.846 [2024-11-20 07:27:51.985754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.846 [2024-11-20 07:27:51.985783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.846 qpair failed and we were unable to recover it. 00:25:48.846 [2024-11-20 07:27:51.985912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.846 [2024-11-20 07:27:51.985941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.846 qpair failed and we were unable to recover it. 
00:25:48.846 [2024-11-20 07:27:51.986040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.846 [2024-11-20 07:27:51.986071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.846 qpair failed and we were unable to recover it. 00:25:48.846 [2024-11-20 07:27:51.986212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.846 [2024-11-20 07:27:51.986238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.846 qpair failed and we were unable to recover it. 00:25:48.846 [2024-11-20 07:27:51.986379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.846 [2024-11-20 07:27:51.986418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.846 qpair failed and we were unable to recover it. 00:25:48.846 [2024-11-20 07:27:51.986575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.846 [2024-11-20 07:27:51.986607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.846 qpair failed and we were unable to recover it. 00:25:48.846 [2024-11-20 07:27:51.986710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.846 [2024-11-20 07:27:51.986746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.846 qpair failed and we were unable to recover it. 00:25:48.846 [2024-11-20 07:27:51.986879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.846 [2024-11-20 07:27:51.986908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.846 qpair failed and we were unable to recover it. 00:25:48.846 [2024-11-20 07:27:51.987030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.846 [2024-11-20 07:27:51.987060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.846 qpair failed and we were unable to recover it. 00:25:48.846 [2024-11-20 07:27:51.987184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.846 [2024-11-20 07:27:51.987214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.846 qpair failed and we were unable to recover it. 00:25:48.846 [2024-11-20 07:27:51.987384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.846 [2024-11-20 07:27:51.987411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.846 qpair failed and we were unable to recover it. 00:25:48.846 [2024-11-20 07:27:51.987506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.846 [2024-11-20 07:27:51.987532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.846 qpair failed and we were unable to recover it. 
00:25:48.846 [2024-11-20 07:27:51.987689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.846 [2024-11-20 07:27:51.987716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420
00:25:48.846 qpair failed and we were unable to recover it.
[... the same three-line failure repeats continuously from 07:27:51.987 through 07:27:52.016: posix.c:1054:posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1b2bfa0 / 0x7fce10000b90 / 0x7fce14000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." ...]
00:25:48.852 [2024-11-20 07:27:52.016815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.852 [2024-11-20 07:27:52.016841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.852 qpair failed and we were unable to recover it. 00:25:48.852 [2024-11-20 07:27:52.016937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.852 [2024-11-20 07:27:52.016963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.852 qpair failed and we were unable to recover it. 00:25:48.852 [2024-11-20 07:27:52.017077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.852 [2024-11-20 07:27:52.017106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.852 qpair failed and we were unable to recover it. 00:25:48.852 [2024-11-20 07:27:52.017191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.852 [2024-11-20 07:27:52.017217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.852 qpair failed and we were unable to recover it. 00:25:48.852 [2024-11-20 07:27:52.017315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.852 [2024-11-20 07:27:52.017342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.852 qpair failed and we were unable to recover it. 00:25:48.852 [2024-11-20 07:27:52.017440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.852 [2024-11-20 07:27:52.017466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.852 qpair failed and we were unable to recover it. 00:25:48.852 [2024-11-20 07:27:52.017549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.852 [2024-11-20 07:27:52.017575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.852 qpair failed and we were unable to recover it. 00:25:48.852 [2024-11-20 07:27:52.017655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.852 [2024-11-20 07:27:52.017682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.852 qpair failed and we were unable to recover it. 00:25:48.852 [2024-11-20 07:27:52.017771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.852 [2024-11-20 07:27:52.017797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.852 qpair failed and we were unable to recover it. 00:25:48.852 [2024-11-20 07:27:52.017895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.852 [2024-11-20 07:27:52.017922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.852 qpair failed and we were unable to recover it. 
00:25:48.852 [2024-11-20 07:27:52.018012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.852 [2024-11-20 07:27:52.018038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.852 qpair failed and we were unable to recover it. 00:25:48.852 [2024-11-20 07:27:52.018162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.852 [2024-11-20 07:27:52.018189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.852 qpair failed and we were unable to recover it. 00:25:48.852 [2024-11-20 07:27:52.018270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.852 [2024-11-20 07:27:52.018309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.852 qpair failed and we were unable to recover it. 00:25:48.852 [2024-11-20 07:27:52.018390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.852 [2024-11-20 07:27:52.018418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.852 qpair failed and we were unable to recover it. 00:25:48.852 [2024-11-20 07:27:52.018508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.852 [2024-11-20 07:27:52.018536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.852 qpair failed and we were unable to recover it. 00:25:48.852 [2024-11-20 07:27:52.018673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.852 [2024-11-20 07:27:52.018700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.852 qpair failed and we were unable to recover it. 00:25:48.852 [2024-11-20 07:27:52.018790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.852 [2024-11-20 07:27:52.018816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.852 qpair failed and we were unable to recover it. 00:25:48.852 [2024-11-20 07:27:52.018912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.852 [2024-11-20 07:27:52.018938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.852 qpair failed and we were unable to recover it. 00:25:48.852 [2024-11-20 07:27:52.019025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.852 [2024-11-20 07:27:52.019052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.852 qpair failed and we were unable to recover it. 00:25:48.852 [2024-11-20 07:27:52.019151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.852 [2024-11-20 07:27:52.019177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.852 qpair failed and we were unable to recover it. 
00:25:48.852 [2024-11-20 07:27:52.019299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.852 [2024-11-20 07:27:52.019331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.852 qpair failed and we were unable to recover it. 00:25:48.852 [2024-11-20 07:27:52.019420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.852 [2024-11-20 07:27:52.019447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.852 qpair failed and we were unable to recover it. 00:25:48.852 [2024-11-20 07:27:52.019529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.852 [2024-11-20 07:27:52.019556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.852 qpair failed and we were unable to recover it. 00:25:48.852 [2024-11-20 07:27:52.019685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.852 [2024-11-20 07:27:52.019712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.852 qpair failed and we were unable to recover it. 00:25:48.852 [2024-11-20 07:27:52.019860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.852 [2024-11-20 07:27:52.019889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.852 qpair failed and we were unable to recover it. 00:25:48.852 [2024-11-20 07:27:52.019985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.852 [2024-11-20 07:27:52.020011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.852 qpair failed and we were unable to recover it. 00:25:48.852 [2024-11-20 07:27:52.020162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.852 [2024-11-20 07:27:52.020189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.852 qpair failed and we were unable to recover it. 00:25:48.852 [2024-11-20 07:27:52.020271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.852 [2024-11-20 07:27:52.020299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.852 qpair failed and we were unable to recover it. 00:25:48.852 [2024-11-20 07:27:52.020399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.852 [2024-11-20 07:27:52.020426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.853 qpair failed and we were unable to recover it. 00:25:48.853 [2024-11-20 07:27:52.020515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.853 [2024-11-20 07:27:52.020542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.853 qpair failed and we were unable to recover it. 
00:25:48.853 [2024-11-20 07:27:52.020634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.853 [2024-11-20 07:27:52.020661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.853 qpair failed and we were unable to recover it. 00:25:48.853 [2024-11-20 07:27:52.020743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.853 [2024-11-20 07:27:52.020770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.853 qpair failed and we were unable to recover it. 00:25:48.853 [2024-11-20 07:27:52.020881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.853 [2024-11-20 07:27:52.020907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.853 qpair failed and we were unable to recover it. 00:25:48.853 [2024-11-20 07:27:52.020989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.853 [2024-11-20 07:27:52.021015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.853 qpair failed and we were unable to recover it. 00:25:48.853 [2024-11-20 07:27:52.021109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.853 [2024-11-20 07:27:52.021136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.853 qpair failed and we were unable to recover it. 00:25:48.853 [2024-11-20 07:27:52.021215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.853 [2024-11-20 07:27:52.021242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.853 qpair failed and we were unable to recover it. 00:25:48.853 [2024-11-20 07:27:52.021364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.853 [2024-11-20 07:27:52.021394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.853 qpair failed and we were unable to recover it. 00:25:48.853 [2024-11-20 07:27:52.021475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.853 [2024-11-20 07:27:52.021501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.853 qpair failed and we were unable to recover it. 00:25:48.853 [2024-11-20 07:27:52.021602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.853 [2024-11-20 07:27:52.021629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.853 qpair failed and we were unable to recover it. 00:25:48.853 [2024-11-20 07:27:52.021724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.853 [2024-11-20 07:27:52.021751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.853 qpair failed and we were unable to recover it. 
00:25:48.853 [2024-11-20 07:27:52.022532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.853 [2024-11-20 07:27:52.022564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.853 qpair failed and we were unable to recover it. 00:25:48.853 [2024-11-20 07:27:52.022722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.853 [2024-11-20 07:27:52.022749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.853 qpair failed and we were unable to recover it. 00:25:48.853 [2024-11-20 07:27:52.022870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.853 [2024-11-20 07:27:52.022896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.853 qpair failed and we were unable to recover it. 00:25:48.853 [2024-11-20 07:27:52.022987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.853 [2024-11-20 07:27:52.023014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.853 qpair failed and we were unable to recover it. 00:25:48.853 [2024-11-20 07:27:52.023109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.853 [2024-11-20 07:27:52.023136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.853 qpair failed and we were unable to recover it. 00:25:48.853 [2024-11-20 07:27:52.023287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.853 [2024-11-20 07:27:52.023343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.853 qpair failed and we were unable to recover it. 00:25:48.853 [2024-11-20 07:27:52.023439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.853 [2024-11-20 07:27:52.023485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.853 qpair failed and we were unable to recover it. 00:25:48.853 [2024-11-20 07:27:52.023642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.853 [2024-11-20 07:27:52.023672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.853 qpair failed and we were unable to recover it. 00:25:48.853 [2024-11-20 07:27:52.023819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.853 [2024-11-20 07:27:52.023845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.853 qpair failed and we were unable to recover it. 00:25:48.853 [2024-11-20 07:27:52.023946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.853 [2024-11-20 07:27:52.023973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.853 qpair failed and we were unable to recover it. 
00:25:48.853 [2024-11-20 07:27:52.024092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.853 [2024-11-20 07:27:52.024118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.853 qpair failed and we were unable to recover it. 00:25:48.853 [2024-11-20 07:27:52.024242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.853 [2024-11-20 07:27:52.024269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.853 qpair failed and we were unable to recover it. 00:25:48.853 [2024-11-20 07:27:52.024376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.853 [2024-11-20 07:27:52.024411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.853 qpair failed and we were unable to recover it. 00:25:48.853 [2024-11-20 07:27:52.024507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.853 [2024-11-20 07:27:52.024533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.853 qpair failed and we were unable to recover it. 00:25:48.853 [2024-11-20 07:27:52.024631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.853 [2024-11-20 07:27:52.024658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.853 qpair failed and we were unable to recover it. 00:25:48.853 [2024-11-20 07:27:52.024772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.853 [2024-11-20 07:27:52.024817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.853 qpair failed and we were unable to recover it. 00:25:48.853 [2024-11-20 07:27:52.024954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.853 [2024-11-20 07:27:52.024980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.853 qpair failed and we were unable to recover it. 00:25:48.853 [2024-11-20 07:27:52.025096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.853 [2024-11-20 07:27:52.025126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.853 qpair failed and we were unable to recover it. 00:25:48.853 [2024-11-20 07:27:52.025219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.853 [2024-11-20 07:27:52.025246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.853 qpair failed and we were unable to recover it. 00:25:48.853 [2024-11-20 07:27:52.025381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.853 [2024-11-20 07:27:52.025422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.853 qpair failed and we were unable to recover it. 
00:25:48.853 [2024-11-20 07:27:52.025529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.853 [2024-11-20 07:27:52.025557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.853 qpair failed and we were unable to recover it. 00:25:48.853 [2024-11-20 07:27:52.025687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.853 [2024-11-20 07:27:52.025725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.854 qpair failed and we were unable to recover it. 00:25:48.854 [2024-11-20 07:27:52.025829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.854 [2024-11-20 07:27:52.025855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.854 qpair failed and we were unable to recover it. 00:25:48.854 [2024-11-20 07:27:52.025974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.854 [2024-11-20 07:27:52.026001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.854 qpair failed and we were unable to recover it. 00:25:48.854 [2024-11-20 07:27:52.026085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.854 [2024-11-20 07:27:52.026110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.854 qpair failed and we were unable to recover it. 00:25:48.854 [2024-11-20 07:27:52.026207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.854 [2024-11-20 07:27:52.026234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.854 qpair failed and we were unable to recover it. 00:25:48.854 [2024-11-20 07:27:52.026363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.854 [2024-11-20 07:27:52.026391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.854 qpair failed and we were unable to recover it. 00:25:48.854 [2024-11-20 07:27:52.026484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.854 [2024-11-20 07:27:52.026510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.854 qpair failed and we were unable to recover it. 00:25:48.854 [2024-11-20 07:27:52.026596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.854 [2024-11-20 07:27:52.026623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.854 qpair failed and we were unable to recover it. 00:25:48.854 [2024-11-20 07:27:52.026726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.854 [2024-11-20 07:27:52.026751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.854 qpair failed and we were unable to recover it. 
00:25:48.854 [2024-11-20 07:27:52.026860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.854 [2024-11-20 07:27:52.026886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.854 qpair failed and we were unable to recover it. 00:25:48.854 [2024-11-20 07:27:52.026969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.854 [2024-11-20 07:27:52.026995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.854 qpair failed and we were unable to recover it. 00:25:48.854 [2024-11-20 07:27:52.027081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.854 [2024-11-20 07:27:52.027119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.854 qpair failed and we were unable to recover it. 00:25:48.854 [2024-11-20 07:27:52.027292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.854 [2024-11-20 07:27:52.027332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.854 qpair failed and we were unable to recover it. 00:25:48.854 [2024-11-20 07:27:52.027424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.854 [2024-11-20 07:27:52.027450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.854 qpair failed and we were unable to recover it. 00:25:48.854 [2024-11-20 07:27:52.027535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.854 [2024-11-20 07:27:52.027563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.854 qpair failed and we were unable to recover it. 00:25:48.854 [2024-11-20 07:27:52.027656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.854 [2024-11-20 07:27:52.027683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.854 qpair failed and we were unable to recover it. 00:25:48.854 [2024-11-20 07:27:52.027836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.854 [2024-11-20 07:27:52.027863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.854 qpair failed and we were unable to recover it. 00:25:48.854 [2024-11-20 07:27:52.027961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.854 [2024-11-20 07:27:52.027988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.854 qpair failed and we were unable to recover it. 00:25:48.854 [2024-11-20 07:27:52.028100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.854 [2024-11-20 07:27:52.028131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.854 qpair failed and we were unable to recover it. 
00:25:48.854 [2024-11-20 07:27:52.028247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.854 [2024-11-20 07:27:52.028273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.854 qpair failed and we were unable to recover it. 00:25:48.854 [2024-11-20 07:27:52.028359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.854 [2024-11-20 07:27:52.028385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.854 qpair failed and we were unable to recover it. 00:25:48.854 [2024-11-20 07:27:52.028481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.854 [2024-11-20 07:27:52.028515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.854 qpair failed and we were unable to recover it. 00:25:48.854 [2024-11-20 07:27:52.028623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.854 [2024-11-20 07:27:52.028650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.854 qpair failed and we were unable to recover it. 00:25:48.854 [2024-11-20 07:27:52.028750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.854 [2024-11-20 07:27:52.028778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.854 qpair failed and we were unable to recover it. 00:25:48.854 [2024-11-20 07:27:52.028890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.854 [2024-11-20 07:27:52.028916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.854 qpair failed and we were unable to recover it. 00:25:48.854 [2024-11-20 07:27:52.028995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.854 [2024-11-20 07:27:52.029021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.854 qpair failed and we were unable to recover it. 00:25:48.854 [2024-11-20 07:27:52.029129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.854 [2024-11-20 07:27:52.029155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.854 qpair failed and we were unable to recover it. 00:25:48.854 [2024-11-20 07:27:52.029249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.854 [2024-11-20 07:27:52.029277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.854 qpair failed and we were unable to recover it. 00:25:48.854 [2024-11-20 07:27:52.029386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.854 [2024-11-20 07:27:52.029413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.854 qpair failed and we were unable to recover it. 
00:25:48.854 [2024-11-20 07:27:52.029510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.854 [2024-11-20 07:27:52.029540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.854 qpair failed and we were unable to recover it. 00:25:48.854 [2024-11-20 07:27:52.029631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.854 [2024-11-20 07:27:52.029658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.854 qpair failed and we were unable to recover it. 00:25:48.854 [2024-11-20 07:27:52.029744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.854 [2024-11-20 07:27:52.029770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.854 qpair failed and we were unable to recover it. 00:25:48.854 [2024-11-20 07:27:52.029886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.854 [2024-11-20 07:27:52.029914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.854 qpair failed and we were unable to recover it. 00:25:48.854 [2024-11-20 07:27:52.030006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.854 [2024-11-20 07:27:52.030034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.854 qpair failed and we were unable to recover it. 00:25:48.854 [2024-11-20 07:27:52.030109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.854 [2024-11-20 07:27:52.030134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.854 qpair failed and we were unable to recover it. 00:25:48.854 [2024-11-20 07:27:52.030221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.854 [2024-11-20 07:27:52.030249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.854 qpair failed and we were unable to recover it. 00:25:48.854 [2024-11-20 07:27:52.030358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.854 [2024-11-20 07:27:52.030385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.854 qpair failed and we were unable to recover it. 00:25:48.854 [2024-11-20 07:27:52.030477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.854 [2024-11-20 07:27:52.030505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.854 qpair failed and we were unable to recover it. 00:25:48.854 [2024-11-20 07:27:52.030593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.854 [2024-11-20 07:27:52.030619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.855 qpair failed and we were unable to recover it. 
00:25:48.855 [2024-11-20 07:27:52.030748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.855 [2024-11-20 07:27:52.030774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.855 qpair failed and we were unable to recover it. 00:25:48.855 [2024-11-20 07:27:52.030864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.855 [2024-11-20 07:27:52.030897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.855 qpair failed and we were unable to recover it. 00:25:48.855 [2024-11-20 07:27:52.030980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.855 [2024-11-20 07:27:52.031006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.855 qpair failed and we were unable to recover it. 00:25:48.855 [2024-11-20 07:27:52.031088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.855 [2024-11-20 07:27:52.031115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.855 qpair failed and we were unable to recover it. 00:25:48.855 [2024-11-20 07:27:52.031201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.855 [2024-11-20 07:27:52.031227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.855 qpair failed and we were unable to recover it. 00:25:48.855 [2024-11-20 07:27:52.031346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.855 [2024-11-20 07:27:52.031374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.855 qpair failed and we were unable to recover it. 00:25:48.855 [2024-11-20 07:27:52.031465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.855 [2024-11-20 07:27:52.031494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.855 qpair failed and we were unable to recover it. 00:25:48.855 [2024-11-20 07:27:52.031574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.855 [2024-11-20 07:27:52.031598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.855 qpair failed and we were unable to recover it. 00:25:48.855 [2024-11-20 07:27:52.031725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.855 [2024-11-20 07:27:52.031751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.855 qpair failed and we were unable to recover it. 00:25:48.855 [2024-11-20 07:27:52.031867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.855 [2024-11-20 07:27:52.031893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.855 qpair failed and we were unable to recover it. 
00:25:48.855 [2024-11-20 07:27:52.032003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.855 [2024-11-20 07:27:52.032029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.855 qpair failed and we were unable to recover it. 00:25:48.855 [2024-11-20 07:27:52.032141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.855 [2024-11-20 07:27:52.032167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.855 qpair failed and we were unable to recover it. 00:25:48.855 [2024-11-20 07:27:52.032260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.855 [2024-11-20 07:27:52.032286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.855 qpair failed and we were unable to recover it. 00:25:48.855 [2024-11-20 07:27:52.032380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.855 [2024-11-20 07:27:52.032405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.855 qpair failed and we were unable to recover it. 00:25:48.855 [2024-11-20 07:27:52.032497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.855 [2024-11-20 07:27:52.032524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.855 qpair failed and we were unable to recover it. 00:25:48.855 [2024-11-20 07:27:52.032622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.855 [2024-11-20 07:27:52.032648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.855 qpair failed and we were unable to recover it. 00:25:48.855 [2024-11-20 07:27:52.032775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.855 [2024-11-20 07:27:52.032802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.855 qpair failed and we were unable to recover it. 00:25:48.855 [2024-11-20 07:27:52.032952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.855 [2024-11-20 07:27:52.032978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.855 qpair failed and we were unable to recover it. 00:25:48.855 [2024-11-20 07:27:52.033086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.855 [2024-11-20 07:27:52.033112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.855 qpair failed and we were unable to recover it. 00:25:48.855 [2024-11-20 07:27:52.033235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.855 [2024-11-20 07:27:52.033262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.855 qpair failed and we were unable to recover it. 
00:25:48.855 [2024-11-20 07:27:52.033369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.855 [2024-11-20 07:27:52.033396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.855 qpair failed and we were unable to recover it. 00:25:48.855 [2024-11-20 07:27:52.033491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.855 [2024-11-20 07:27:52.033517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.855 qpair failed and we were unable to recover it. 00:25:48.855 [2024-11-20 07:27:52.033630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.855 [2024-11-20 07:27:52.033662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.855 qpair failed and we were unable to recover it. 00:25:48.855 [2024-11-20 07:27:52.033781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.855 [2024-11-20 07:27:52.033808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.855 qpair failed and we were unable to recover it. 00:25:48.855 [2024-11-20 07:27:52.033927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.855 [2024-11-20 07:27:52.033957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.855 qpair failed and we were unable to recover it. 00:25:48.855 [2024-11-20 07:27:52.034041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.855 [2024-11-20 07:27:52.034067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.855 qpair failed and we were unable to recover it. 00:25:48.855 [2024-11-20 07:27:52.034160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.855 [2024-11-20 07:27:52.034188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.855 qpair failed and we were unable to recover it. 00:25:48.855 [2024-11-20 07:27:52.034321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.855 [2024-11-20 07:27:52.034349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.855 qpair failed and we were unable to recover it. 00:25:48.855 [2024-11-20 07:27:52.034438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.855 [2024-11-20 07:27:52.034465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.855 qpair failed and we were unable to recover it. 00:25:48.855 [2024-11-20 07:27:52.034544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.855 [2024-11-20 07:27:52.034571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.855 qpair failed and we were unable to recover it. 
00:25:48.855 [2024-11-20 07:27:52.034688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.855 [2024-11-20 07:27:52.034716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.855 qpair failed and we were unable to recover it. 00:25:48.855 [2024-11-20 07:27:52.034831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.855 [2024-11-20 07:27:52.034858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.855 qpair failed and we were unable to recover it. 00:25:48.855 [2024-11-20 07:27:52.034943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.855 [2024-11-20 07:27:52.034970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.855 qpair failed and we were unable to recover it. 00:25:48.855 [2024-11-20 07:27:52.035088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.855 [2024-11-20 07:27:52.035117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.855 qpair failed and we were unable to recover it. 00:25:48.855 [2024-11-20 07:27:52.035202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.855 [2024-11-20 07:27:52.035229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.855 qpair failed and we were unable to recover it. 00:25:48.855 [2024-11-20 07:27:52.035323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.856 [2024-11-20 07:27:52.035351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.856 qpair failed and we were unable to recover it. 00:25:48.856 [2024-11-20 07:27:52.035434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.856 [2024-11-20 07:27:52.035461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.856 qpair failed and we were unable to recover it. 00:25:48.856 [2024-11-20 07:27:52.035549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.856 [2024-11-20 07:27:52.035576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.856 qpair failed and we were unable to recover it. 00:25:48.856 [2024-11-20 07:27:52.035693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.856 [2024-11-20 07:27:52.035720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.856 qpair failed and we were unable to recover it. 00:25:48.856 [2024-11-20 07:27:52.035808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.856 [2024-11-20 07:27:52.035835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.856 qpair failed and we were unable to recover it. 
00:25:48.856 [2024-11-20 07:27:52.035950] through 00:25:48.861 [2024-11-20 07:27:52.064243]: the same three-line failure sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error; qpair failed and we were unable to recover it.) repeats continuously for tqpair=0x1b2bfa0, 0x7fce10000b90, 0x7fce14000b90, and 0x7fce1c000b90, all with addr=10.0.0.2, port=4420.
00:25:48.861 [2024-11-20 07:27:52.064340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.861 [2024-11-20 07:27:52.064366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.861 qpair failed and we were unable to recover it. 00:25:48.861 [2024-11-20 07:27:52.064456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.861 [2024-11-20 07:27:52.064482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.861 qpair failed and we were unable to recover it. 00:25:48.861 [2024-11-20 07:27:52.064563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.861 [2024-11-20 07:27:52.064589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.861 qpair failed and we were unable to recover it. 00:25:48.861 [2024-11-20 07:27:52.064674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.861 [2024-11-20 07:27:52.064700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.861 qpair failed and we were unable to recover it. 00:25:48.861 [2024-11-20 07:27:52.064812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.861 [2024-11-20 07:27:52.064839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.861 qpair failed and we were unable to recover it. 00:25:48.861 [2024-11-20 07:27:52.065024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.861 [2024-11-20 07:27:52.065053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.861 qpair failed and we were unable to recover it. 00:25:48.861 [2024-11-20 07:27:52.065151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.861 [2024-11-20 07:27:52.065179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.861 qpair failed and we were unable to recover it. 00:25:48.861 [2024-11-20 07:27:52.065346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.861 [2024-11-20 07:27:52.065372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.861 qpair failed and we were unable to recover it. 00:25:48.861 [2024-11-20 07:27:52.065504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.861 [2024-11-20 07:27:52.065544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.861 qpair failed and we were unable to recover it. 00:25:48.861 [2024-11-20 07:27:52.065694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.861 [2024-11-20 07:27:52.065722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.861 qpair failed and we were unable to recover it. 
00:25:48.861 [2024-11-20 07:27:52.065843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.861 [2024-11-20 07:27:52.065870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.861 qpair failed and we were unable to recover it. 00:25:48.861 [2024-11-20 07:27:52.065992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.861 [2024-11-20 07:27:52.066020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.861 qpair failed and we were unable to recover it. 00:25:48.861 [2024-11-20 07:27:52.066111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.861 [2024-11-20 07:27:52.066139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.861 qpair failed and we were unable to recover it. 00:25:48.861 [2024-11-20 07:27:52.066237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.861 [2024-11-20 07:27:52.066277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.861 qpair failed and we were unable to recover it. 00:25:48.861 [2024-11-20 07:27:52.066392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.861 [2024-11-20 07:27:52.066419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.861 qpair failed and we were unable to recover it. 00:25:48.861 [2024-11-20 07:27:52.066513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.861 [2024-11-20 07:27:52.066539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.861 qpair failed and we were unable to recover it. 00:25:48.861 [2024-11-20 07:27:52.066675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.861 [2024-11-20 07:27:52.066701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.861 qpair failed and we were unable to recover it. 00:25:48.861 [2024-11-20 07:27:52.066840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.862 [2024-11-20 07:27:52.066865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.862 qpair failed and we were unable to recover it. 00:25:48.862 [2024-11-20 07:27:52.066966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.862 [2024-11-20 07:27:52.066991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.862 qpair failed and we were unable to recover it. 00:25:48.862 [2024-11-20 07:27:52.067090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.862 [2024-11-20 07:27:52.067117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.862 qpair failed and we were unable to recover it. 
00:25:48.862 [2024-11-20 07:27:52.067260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.862 [2024-11-20 07:27:52.067313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.862 qpair failed and we were unable to recover it. 00:25:48.862 [2024-11-20 07:27:52.067431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.862 [2024-11-20 07:27:52.067460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.862 qpair failed and we were unable to recover it. 00:25:48.862 [2024-11-20 07:27:52.067578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.862 [2024-11-20 07:27:52.067607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.862 qpair failed and we were unable to recover it. 00:25:48.862 [2024-11-20 07:27:52.067739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.862 [2024-11-20 07:27:52.067787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.862 qpair failed and we were unable to recover it. 00:25:48.862 [2024-11-20 07:27:52.067929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.862 [2024-11-20 07:27:52.067975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.862 qpair failed and we were unable to recover it. 00:25:48.862 [2024-11-20 07:27:52.068068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.862 [2024-11-20 07:27:52.068096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.862 qpair failed and we were unable to recover it. 00:25:48.862 [2024-11-20 07:27:52.068183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.862 [2024-11-20 07:27:52.068209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.862 qpair failed and we were unable to recover it. 00:25:48.862 [2024-11-20 07:27:52.068337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.862 [2024-11-20 07:27:52.068366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.862 qpair failed and we were unable to recover it. 00:25:48.862 [2024-11-20 07:27:52.068483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.862 [2024-11-20 07:27:52.068511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.862 qpair failed and we were unable to recover it. 00:25:48.862 [2024-11-20 07:27:52.068604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.862 [2024-11-20 07:27:52.068631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.862 qpair failed and we were unable to recover it. 
00:25:48.862 [2024-11-20 07:27:52.068743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.862 [2024-11-20 07:27:52.068770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.862 qpair failed and we were unable to recover it. 00:25:48.862 [2024-11-20 07:27:52.068865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.862 [2024-11-20 07:27:52.068892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.862 qpair failed and we were unable to recover it. 00:25:48.862 [2024-11-20 07:27:52.069011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.862 [2024-11-20 07:27:52.069038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.862 qpair failed and we were unable to recover it. 00:25:48.862 [2024-11-20 07:27:52.069126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.862 [2024-11-20 07:27:52.069152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.862 qpair failed and we were unable to recover it. 00:25:48.862 [2024-11-20 07:27:52.069262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.862 [2024-11-20 07:27:52.069310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.862 qpair failed and we were unable to recover it. 00:25:48.862 [2024-11-20 07:27:52.069465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.862 [2024-11-20 07:27:52.069494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.862 qpair failed and we were unable to recover it. 00:25:48.862 [2024-11-20 07:27:52.069586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.862 [2024-11-20 07:27:52.069616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.862 qpair failed and we were unable to recover it. 00:25:48.862 [2024-11-20 07:27:52.069715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.862 [2024-11-20 07:27:52.069743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.862 qpair failed and we were unable to recover it. 00:25:48.862 [2024-11-20 07:27:52.069835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.862 [2024-11-20 07:27:52.069864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.862 qpair failed and we were unable to recover it. 00:25:48.862 [2024-11-20 07:27:52.069967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.862 [2024-11-20 07:27:52.070006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.862 qpair failed and we were unable to recover it. 
00:25:48.862 [2024-11-20 07:27:52.070109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.862 [2024-11-20 07:27:52.070149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.862 qpair failed and we were unable to recover it. 00:25:48.862 [2024-11-20 07:27:52.070294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.862 [2024-11-20 07:27:52.070338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.862 qpair failed and we were unable to recover it. 00:25:48.862 [2024-11-20 07:27:52.070450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.862 [2024-11-20 07:27:52.070478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.862 qpair failed and we were unable to recover it. 00:25:48.862 [2024-11-20 07:27:52.070575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.862 [2024-11-20 07:27:52.070604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.862 qpair failed and we were unable to recover it. 00:25:48.862 [2024-11-20 07:27:52.070713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.862 [2024-11-20 07:27:52.070760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.862 qpair failed and we were unable to recover it. 00:25:48.862 [2024-11-20 07:27:52.070873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.862 [2024-11-20 07:27:52.070901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.862 qpair failed and we were unable to recover it. 00:25:48.862 [2024-11-20 07:27:52.071014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.862 [2024-11-20 07:27:52.071049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.862 qpair failed and we were unable to recover it. 00:25:48.862 [2024-11-20 07:27:52.071172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.862 [2024-11-20 07:27:52.071204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.862 qpair failed and we were unable to recover it. 00:25:48.862 [2024-11-20 07:27:52.071347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.862 [2024-11-20 07:27:52.071397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.862 qpair failed and we were unable to recover it. 00:25:48.862 [2024-11-20 07:27:52.071543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.862 [2024-11-20 07:27:52.071573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.862 qpair failed and we were unable to recover it. 
00:25:48.862 [2024-11-20 07:27:52.071747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.862 [2024-11-20 07:27:52.071796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.862 qpair failed and we were unable to recover it. 00:25:48.862 [2024-11-20 07:27:52.071914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.862 [2024-11-20 07:27:52.071963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.863 qpair failed and we were unable to recover it. 00:25:48.863 [2024-11-20 07:27:52.072148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.863 [2024-11-20 07:27:52.072194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.863 qpair failed and we were unable to recover it. 00:25:48.863 [2024-11-20 07:27:52.072315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.863 [2024-11-20 07:27:52.072343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.863 qpair failed and we were unable to recover it. 00:25:48.863 [2024-11-20 07:27:52.072431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.863 [2024-11-20 07:27:52.072457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.863 qpair failed and we were unable to recover it. 00:25:48.863 [2024-11-20 07:27:52.072623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.863 [2024-11-20 07:27:52.072668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.863 qpair failed and we were unable to recover it. 00:25:48.863 [2024-11-20 07:27:52.072808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.863 [2024-11-20 07:27:52.072855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.863 qpair failed and we were unable to recover it. 00:25:48.863 [2024-11-20 07:27:52.072967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.863 [2024-11-20 07:27:52.073013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.863 qpair failed and we were unable to recover it. 00:25:48.863 [2024-11-20 07:27:52.073156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.863 [2024-11-20 07:27:52.073184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.863 qpair failed and we were unable to recover it. 00:25:48.863 [2024-11-20 07:27:52.073272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.863 [2024-11-20 07:27:52.073299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.863 qpair failed and we were unable to recover it. 
00:25:48.863 [2024-11-20 07:27:52.073464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.863 [2024-11-20 07:27:52.073494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.863 qpair failed and we were unable to recover it. 00:25:48.863 [2024-11-20 07:27:52.073642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.863 [2024-11-20 07:27:52.073677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.863 qpair failed and we were unable to recover it. 00:25:48.863 [2024-11-20 07:27:52.073849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.863 [2024-11-20 07:27:52.073898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.863 qpair failed and we were unable to recover it. 00:25:48.863 [2024-11-20 07:27:52.074018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.863 [2024-11-20 07:27:52.074064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.863 qpair failed and we were unable to recover it. 00:25:48.863 [2024-11-20 07:27:52.074168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.863 [2024-11-20 07:27:52.074197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.863 qpair failed and we were unable to recover it. 00:25:48.863 [2024-11-20 07:27:52.074313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.863 [2024-11-20 07:27:52.074341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.863 qpair failed and we were unable to recover it. 00:25:48.863 [2024-11-20 07:27:52.074449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.863 [2024-11-20 07:27:52.074478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.863 qpair failed and we were unable to recover it. 00:25:48.863 [2024-11-20 07:27:52.074583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.863 [2024-11-20 07:27:52.074609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.863 qpair failed and we were unable to recover it. 00:25:48.863 [2024-11-20 07:27:52.074726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.863 [2024-11-20 07:27:52.074773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.863 qpair failed and we were unable to recover it. 00:25:48.863 [2024-11-20 07:27:52.074858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.863 [2024-11-20 07:27:52.074884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.863 qpair failed and we were unable to recover it. 
00:25:48.863 [2024-11-20 07:27:52.075001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.863 [2024-11-20 07:27:52.075028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.863 qpair failed and we were unable to recover it. 00:25:48.863 [2024-11-20 07:27:52.075140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.863 [2024-11-20 07:27:52.075167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.863 qpair failed and we were unable to recover it. 00:25:48.863 [2024-11-20 07:27:52.075269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.863 [2024-11-20 07:27:52.075314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.863 qpair failed and we were unable to recover it. 00:25:48.863 [2024-11-20 07:27:52.075437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.863 [2024-11-20 07:27:52.075464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.863 qpair failed and we were unable to recover it. 00:25:48.863 [2024-11-20 07:27:52.075579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.863 [2024-11-20 07:27:52.075605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.863 qpair failed and we were unable to recover it. 00:25:48.863 [2024-11-20 07:27:52.075702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.863 [2024-11-20 07:27:52.075729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.863 qpair failed and we were unable to recover it. 00:25:48.863 [2024-11-20 07:27:52.075826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.863 [2024-11-20 07:27:52.075855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.863 qpair failed and we were unable to recover it. 00:25:48.863 [2024-11-20 07:27:52.075977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.863 [2024-11-20 07:27:52.076004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.863 qpair failed and we were unable to recover it. 00:25:48.863 [2024-11-20 07:27:52.076119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.863 [2024-11-20 07:27:52.076145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.863 qpair failed and we were unable to recover it. 00:25:48.863 [2024-11-20 07:27:52.076273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.863 [2024-11-20 07:27:52.076300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.863 qpair failed and we were unable to recover it. 
00:25:48.863 [2024-11-20 07:27:52.076406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.863 [2024-11-20 07:27:52.076434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.863 qpair failed and we were unable to recover it. 00:25:48.863 [2024-11-20 07:27:52.076526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.863 [2024-11-20 07:27:52.076553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.863 qpair failed and we were unable to recover it. 00:25:48.863 [2024-11-20 07:27:52.076723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.863 [2024-11-20 07:27:52.076757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.863 qpair failed and we were unable to recover it. 00:25:48.863 [2024-11-20 07:27:52.076904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.863 [2024-11-20 07:27:52.076946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.863 qpair failed and we were unable to recover it. 00:25:48.863 [2024-11-20 07:27:52.077155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.863 [2024-11-20 07:27:52.077187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.863 qpair failed and we were unable to recover it. 00:25:48.863 [2024-11-20 07:27:52.077329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.863 [2024-11-20 07:27:52.077357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.863 qpair failed and we were unable to recover it. 00:25:48.863 [2024-11-20 07:27:52.077445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.863 [2024-11-20 07:27:52.077472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.863 qpair failed and we were unable to recover it. 00:25:48.863 [2024-11-20 07:27:52.077598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.863 [2024-11-20 07:27:52.077627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.863 qpair failed and we were unable to recover it. 00:25:48.863 [2024-11-20 07:27:52.077809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.863 [2024-11-20 07:27:52.077864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.863 qpair failed and we were unable to recover it. 00:25:48.863 [2024-11-20 07:27:52.077975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.864 [2024-11-20 07:27:52.078007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.864 qpair failed and we were unable to recover it. 
00:25:48.864 [2024-11-20 07:27:52.078200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.864 [2024-11-20 07:27:52.078231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.864 qpair failed and we were unable to recover it. 00:25:48.864 [2024-11-20 07:27:52.078366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.864 [2024-11-20 07:27:52.078407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.864 qpair failed and we were unable to recover it. 00:25:48.864 [2024-11-20 07:27:52.078622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.864 [2024-11-20 07:27:52.078651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.864 qpair failed and we were unable to recover it. 00:25:48.864 [2024-11-20 07:27:52.078781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.864 [2024-11-20 07:27:52.078831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.864 qpair failed and we were unable to recover it. 00:25:48.864 [2024-11-20 07:27:52.079026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.864 [2024-11-20 07:27:52.079074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.864 qpair failed and we were unable to recover it. 00:25:48.864 [2024-11-20 07:27:52.079184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.864 [2024-11-20 07:27:52.079211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.864 qpair failed and we were unable to recover it. 00:25:48.864 [2024-11-20 07:27:52.079331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.864 [2024-11-20 07:27:52.079371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.864 qpair failed and we were unable to recover it. 00:25:48.864 [2024-11-20 07:27:52.079476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.864 [2024-11-20 07:27:52.079503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.864 qpair failed and we were unable to recover it. 00:25:48.864 [2024-11-20 07:27:52.079609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.864 [2024-11-20 07:27:52.079638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.864 qpair failed and we were unable to recover it. 00:25:48.864 [2024-11-20 07:27:52.079808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.864 [2024-11-20 07:27:52.079855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.864 qpair failed and we were unable to recover it. 
00:25:48.864 [2024-11-20 07:27:52.079996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.864 [2024-11-20 07:27:52.080041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.864 qpair failed and we were unable to recover it. 00:25:48.864 [2024-11-20 07:27:52.080191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.864 [2024-11-20 07:27:52.080219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.864 qpair failed and we were unable to recover it. 00:25:48.864 [2024-11-20 07:27:52.080370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.864 [2024-11-20 07:27:52.080399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.864 qpair failed and we were unable to recover it. 00:25:48.864 [2024-11-20 07:27:52.080528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.864 [2024-11-20 07:27:52.080558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.864 qpair failed and we were unable to recover it. 00:25:48.864 [2024-11-20 07:27:52.080671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.864 [2024-11-20 07:27:52.080716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.864 qpair failed and we were unable to recover it. 00:25:48.864 [2024-11-20 07:27:52.080858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.864 [2024-11-20 07:27:52.080905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.864 qpair failed and we were unable to recover it. 00:25:48.864 [2024-11-20 07:27:52.080984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.864 [2024-11-20 07:27:52.081011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.864 qpair failed and we were unable to recover it. 00:25:48.864 [2024-11-20 07:27:52.081126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.864 [2024-11-20 07:27:52.081165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.864 qpair failed and we were unable to recover it. 00:25:48.864 [2024-11-20 07:27:52.081257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.864 [2024-11-20 07:27:52.081285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.864 qpair failed and we were unable to recover it. 00:25:48.864 [2024-11-20 07:27:52.081411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.864 [2024-11-20 07:27:52.081437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.864 qpair failed and we were unable to recover it. 
00:25:48.864 [2024-11-20 07:27:52.081544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.864 [2024-11-20 07:27:52.081593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.864 qpair failed and we were unable to recover it. 00:25:48.864 [2024-11-20 07:27:52.081724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.864 [2024-11-20 07:27:52.081772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.864 qpair failed and we were unable to recover it. 00:25:48.864 [2024-11-20 07:27:52.081899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.864 [2024-11-20 07:27:52.081945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.864 qpair failed and we were unable to recover it. 00:25:48.864 [2024-11-20 07:27:52.082086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.864 [2024-11-20 07:27:52.082115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.864 qpair failed and we were unable to recover it. 00:25:48.864 [2024-11-20 07:27:52.082237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.864 [2024-11-20 07:27:52.082266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.864 qpair failed and we were unable to recover it. 00:25:48.864 [2024-11-20 07:27:52.082416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.864 [2024-11-20 07:27:52.082447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.864 qpair failed and we were unable to recover it. 00:25:48.864 [2024-11-20 07:27:52.082534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.864 [2024-11-20 07:27:52.082581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.864 qpair failed and we were unable to recover it. 00:25:48.864 [2024-11-20 07:27:52.082706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.864 [2024-11-20 07:27:52.082735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.864 qpair failed and we were unable to recover it. 00:25:48.864 [2024-11-20 07:27:52.082878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.864 [2024-11-20 07:27:52.082927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.864 qpair failed and we were unable to recover it. 00:25:48.864 [2024-11-20 07:27:52.083051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.864 [2024-11-20 07:27:52.083080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.864 qpair failed and we were unable to recover it. 
00:25:48.864 [2024-11-20 07:27:52.083183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.864 [2024-11-20 07:27:52.083210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.864 qpair failed and we were unable to recover it. 00:25:48.864 [2024-11-20 07:27:52.083315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.864 [2024-11-20 07:27:52.083342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.864 qpair failed and we were unable to recover it. 00:25:48.864 [2024-11-20 07:27:52.083452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.864 [2024-11-20 07:27:52.083478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.864 qpair failed and we were unable to recover it. 00:25:48.864 [2024-11-20 07:27:52.083569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.864 [2024-11-20 07:27:52.083595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.864 qpair failed and we were unable to recover it. 00:25:48.864 [2024-11-20 07:27:52.083727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.864 [2024-11-20 07:27:52.083753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.864 qpair failed and we were unable to recover it. 00:25:48.864 [2024-11-20 07:27:52.083836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.864 [2024-11-20 07:27:52.083861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.864 qpair failed and we were unable to recover it. 00:25:48.864 [2024-11-20 07:27:52.083947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.864 [2024-11-20 07:27:52.083973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.864 qpair failed and we were unable to recover it. 00:25:48.864 [2024-11-20 07:27:52.084090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.864 [2024-11-20 07:27:52.084115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.865 qpair failed and we were unable to recover it. 00:25:48.865 [2024-11-20 07:27:52.084204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.865 [2024-11-20 07:27:52.084230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.865 qpair failed and we were unable to recover it. 00:25:48.865 [2024-11-20 07:27:52.084331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.865 [2024-11-20 07:27:52.084368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.865 qpair failed and we were unable to recover it. 
00:25:48.865 [2024-11-20 07:27:52.084458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.865 [2024-11-20 07:27:52.084485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.865 qpair failed and we were unable to recover it. 00:25:48.865 [2024-11-20 07:27:52.084583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.865 [2024-11-20 07:27:52.084611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.865 qpair failed and we were unable to recover it. 00:25:48.865 [2024-11-20 07:27:52.084737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.865 [2024-11-20 07:27:52.084767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.865 qpair failed and we were unable to recover it. 00:25:48.865 [2024-11-20 07:27:52.084867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.865 [2024-11-20 07:27:52.084896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.865 qpair failed and we were unable to recover it. 00:25:48.865 [2024-11-20 07:27:52.085045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.865 [2024-11-20 07:27:52.085073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.865 qpair failed and we were unable to recover it. 00:25:48.865 [2024-11-20 07:27:52.085206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.865 [2024-11-20 07:27:52.085235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.865 qpair failed and we were unable to recover it. 00:25:48.865 [2024-11-20 07:27:52.085377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.865 [2024-11-20 07:27:52.085405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.865 qpair failed and we were unable to recover it. 00:25:48.865 [2024-11-20 07:27:52.085490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.865 [2024-11-20 07:27:52.085516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.865 qpair failed and we were unable to recover it. 00:25:48.865 [2024-11-20 07:27:52.085677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.865 [2024-11-20 07:27:52.085720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.865 qpair failed and we were unable to recover it. 00:25:48.865 [2024-11-20 07:27:52.085856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.865 [2024-11-20 07:27:52.085886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.865 qpair failed and we were unable to recover it. 
00:25:48.865 [2024-11-20 07:27:52.086011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.865 [2024-11-20 07:27:52.086042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.865 qpair failed and we were unable to recover it. 00:25:48.865 [2024-11-20 07:27:52.086166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.865 [2024-11-20 07:27:52.086195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.865 qpair failed and we were unable to recover it. 00:25:48.865 [2024-11-20 07:27:52.086324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.865 [2024-11-20 07:27:52.086355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.865 qpair failed and we were unable to recover it. 00:25:48.865 [2024-11-20 07:27:52.086443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.865 [2024-11-20 07:27:52.086470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.865 qpair failed and we were unable to recover it. 00:25:48.865 [2024-11-20 07:27:52.086558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.865 [2024-11-20 07:27:52.086603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.865 qpair failed and we were unable to recover it. 00:25:48.865 [2024-11-20 07:27:52.086753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.865 [2024-11-20 07:27:52.086782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.865 qpair failed and we were unable to recover it. 00:25:48.865 [2024-11-20 07:27:52.086907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.865 [2024-11-20 07:27:52.086936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.865 qpair failed and we were unable to recover it. 00:25:48.865 [2024-11-20 07:27:52.087036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.865 [2024-11-20 07:27:52.087066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.865 qpair failed and we were unable to recover it. 00:25:48.865 [2024-11-20 07:27:52.087189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.865 [2024-11-20 07:27:52.087219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.865 qpair failed and we were unable to recover it. 00:25:48.865 [2024-11-20 07:27:52.087338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.865 [2024-11-20 07:27:52.087380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.865 qpair failed and we were unable to recover it. 
00:25:48.865 [2024-11-20 07:27:52.087496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.865 [2024-11-20 07:27:52.087522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.865 qpair failed and we were unable to recover it. 00:25:48.865 [2024-11-20 07:27:52.087659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.865 [2024-11-20 07:27:52.087699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.865 qpair failed and we were unable to recover it. 00:25:48.865 [2024-11-20 07:27:52.087879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.865 [2024-11-20 07:27:52.087926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.865 qpair failed and we were unable to recover it. 00:25:48.865 [2024-11-20 07:27:52.088008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.865 [2024-11-20 07:27:52.088034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.865 qpair failed and we were unable to recover it. 00:25:48.865 [2024-11-20 07:27:52.088128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.865 [2024-11-20 07:27:52.088156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.865 qpair failed and we were unable to recover it. 00:25:48.865 [2024-11-20 07:27:52.088315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.865 [2024-11-20 07:27:52.088360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.865 qpair failed and we were unable to recover it. 00:25:48.865 [2024-11-20 07:27:52.088475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.865 [2024-11-20 07:27:52.088506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.865 qpair failed and we were unable to recover it. 00:25:48.865 [2024-11-20 07:27:52.088657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.865 [2024-11-20 07:27:52.088706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.865 qpair failed and we were unable to recover it. 00:25:48.865 [2024-11-20 07:27:52.088819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.865 [2024-11-20 07:27:52.088867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.865 qpair failed and we were unable to recover it. 00:25:48.865 [2024-11-20 07:27:52.088987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.865 [2024-11-20 07:27:52.089032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.865 qpair failed and we were unable to recover it. 
00:25:48.865 [2024-11-20 07:27:52.089165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.865 [2024-11-20 07:27:52.089194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.865 qpair failed and we were unable to recover it. 00:25:48.865 [2024-11-20 07:27:52.089316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.865 [2024-11-20 07:27:52.089343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.865 qpair failed and we were unable to recover it. 00:25:48.865 [2024-11-20 07:27:52.089433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.865 [2024-11-20 07:27:52.089459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.865 qpair failed and we were unable to recover it. 00:25:48.865 [2024-11-20 07:27:52.089589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.865 [2024-11-20 07:27:52.089624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.865 qpair failed and we were unable to recover it. 00:25:48.865 [2024-11-20 07:27:52.089728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.865 [2024-11-20 07:27:52.089757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.865 qpair failed and we were unable to recover it. 00:25:48.865 [2024-11-20 07:27:52.089910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.865 [2024-11-20 07:27:52.089940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.865 qpair failed and we were unable to recover it. 00:25:48.866 [2024-11-20 07:27:52.090036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-11-20 07:27:52.090064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.866 qpair failed and we were unable to recover it. 00:25:48.866 [2024-11-20 07:27:52.090191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-11-20 07:27:52.090219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.866 qpair failed and we were unable to recover it. 00:25:48.866 [2024-11-20 07:27:52.090326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-11-20 07:27:52.090372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.866 qpair failed and we were unable to recover it. 00:25:48.866 [2024-11-20 07:27:52.090460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-11-20 07:27:52.090512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.866 qpair failed and we were unable to recover it. 
00:25:48.866 [2024-11-20 07:27:52.090650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-11-20 07:27:52.090679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.866 qpair failed and we were unable to recover it. 00:25:48.866 [2024-11-20 07:27:52.090775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-11-20 07:27:52.090804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.866 qpair failed and we were unable to recover it. 00:25:48.866 [2024-11-20 07:27:52.090923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-11-20 07:27:52.090952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.866 qpair failed and we were unable to recover it. 00:25:48.866 [2024-11-20 07:27:52.091056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-11-20 07:27:52.091085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.866 qpair failed and we were unable to recover it. 00:25:48.866 [2024-11-20 07:27:52.091189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-11-20 07:27:52.091216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.866 qpair failed and we were unable to recover it. 00:25:48.866 [2024-11-20 07:27:52.091320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-11-20 07:27:52.091348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.866 qpair failed and we were unable to recover it. 00:25:48.866 [2024-11-20 07:27:52.091442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-11-20 07:27:52.091468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.866 qpair failed and we were unable to recover it. 00:25:48.866 [2024-11-20 07:27:52.091560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-11-20 07:27:52.091585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.866 qpair failed and we were unable to recover it. 00:25:48.866 [2024-11-20 07:27:52.091678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-11-20 07:27:52.091704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.866 qpair failed and we were unable to recover it. 00:25:48.866 [2024-11-20 07:27:52.091823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-11-20 07:27:52.091850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.866 qpair failed and we were unable to recover it. 
00:25:48.866 [2024-11-20 07:27:52.091937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-11-20 07:27:52.091963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.866 qpair failed and we were unable to recover it. 00:25:48.866 [2024-11-20 07:27:52.092082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-11-20 07:27:52.092110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.866 qpair failed and we were unable to recover it. 00:25:48.866 [2024-11-20 07:27:52.092208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-11-20 07:27:52.092235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.866 qpair failed and we were unable to recover it. 00:25:48.866 [2024-11-20 07:27:52.092382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-11-20 07:27:52.092412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.866 qpair failed and we were unable to recover it. 00:25:48.866 [2024-11-20 07:27:52.092490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-11-20 07:27:52.092517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.866 qpair failed and we were unable to recover it. 00:25:48.866 [2024-11-20 07:27:52.092604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-11-20 07:27:52.092641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.866 qpair failed and we were unable to recover it. 00:25:48.866 [2024-11-20 07:27:52.092751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-11-20 07:27:52.092777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.866 qpair failed and we were unable to recover it. 00:25:48.866 [2024-11-20 07:27:52.092894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-11-20 07:27:52.092921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.866 qpair failed and we were unable to recover it. 00:25:48.866 [2024-11-20 07:27:52.093000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-11-20 07:27:52.093027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.866 qpair failed and we were unable to recover it. 00:25:48.866 [2024-11-20 07:27:52.093108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-11-20 07:27:52.093135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.866 qpair failed and we were unable to recover it. 
00:25:48.866 [2024-11-20 07:27:52.093227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-11-20 07:27:52.093254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.866 qpair failed and we were unable to recover it. 00:25:48.866 [2024-11-20 07:27:52.093371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-11-20 07:27:52.093398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.866 qpair failed and we were unable to recover it. 00:25:48.866 [2024-11-20 07:27:52.093474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-11-20 07:27:52.093500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.866 qpair failed and we were unable to recover it. 00:25:48.866 [2024-11-20 07:27:52.093591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-11-20 07:27:52.093622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.866 qpair failed and we were unable to recover it. 00:25:48.866 [2024-11-20 07:27:52.093747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-11-20 07:27:52.093773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.866 qpair failed and we were unable to recover it. 00:25:48.866 [2024-11-20 07:27:52.093882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-11-20 07:27:52.093908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.866 qpair failed and we were unable to recover it. 00:25:48.866 [2024-11-20 07:27:52.094029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-11-20 07:27:52.094061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.866 qpair failed and we were unable to recover it. 00:25:48.866 [2024-11-20 07:27:52.094152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-11-20 07:27:52.094179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.866 qpair failed and we were unable to recover it. 00:25:48.866 [2024-11-20 07:27:52.094264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-11-20 07:27:52.094300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.866 qpair failed and we were unable to recover it. 00:25:48.866 [2024-11-20 07:27:52.094441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-11-20 07:27:52.094469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.866 qpair failed and we were unable to recover it. 
00:25:48.866 [2024-11-20 07:27:52.094590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-11-20 07:27:52.094621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.866 qpair failed and we were unable to recover it. 00:25:48.866 [2024-11-20 07:27:52.094708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.866 [2024-11-20 07:27:52.094735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.867 qpair failed and we were unable to recover it. 00:25:48.867 [2024-11-20 07:27:52.094825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-11-20 07:27:52.094852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.867 qpair failed and we were unable to recover it. 00:25:48.867 [2024-11-20 07:27:52.094942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-11-20 07:27:52.094969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.867 qpair failed and we were unable to recover it. 00:25:48.867 [2024-11-20 07:27:52.095061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-11-20 07:27:52.095088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.867 qpair failed and we were unable to recover it. 00:25:48.867 [2024-11-20 07:27:52.095197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-11-20 07:27:52.095225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.867 qpair failed and we were unable to recover it. 00:25:48.867 [2024-11-20 07:27:52.095377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-11-20 07:27:52.095418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.867 qpair failed and we were unable to recover it. 00:25:48.867 [2024-11-20 07:27:52.095553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-11-20 07:27:52.095587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.867 qpair failed and we were unable to recover it. 00:25:48.867 [2024-11-20 07:27:52.095689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-11-20 07:27:52.095716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.867 qpair failed and we were unable to recover it. 00:25:48.867 [2024-11-20 07:27:52.095827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-11-20 07:27:52.095854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.867 qpair failed and we were unable to recover it. 
00:25:48.867 [2024-11-20 07:27:52.095941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-11-20 07:27:52.095975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.867 qpair failed and we were unable to recover it. 00:25:48.867 [2024-11-20 07:27:52.096069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-11-20 07:27:52.096096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.867 qpair failed and we were unable to recover it. 00:25:48.867 [2024-11-20 07:27:52.096214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-11-20 07:27:52.096240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.867 qpair failed and we were unable to recover it. 00:25:48.867 [2024-11-20 07:27:52.096350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-11-20 07:27:52.096377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.867 qpair failed and we were unable to recover it. 00:25:48.867 [2024-11-20 07:27:52.096471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-11-20 07:27:52.096498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.867 qpair failed and we were unable to recover it. 00:25:48.867 [2024-11-20 07:27:52.096583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-11-20 07:27:52.096637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.867 qpair failed and we were unable to recover it. 00:25:48.867 [2024-11-20 07:27:52.096748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-11-20 07:27:52.096777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.867 qpair failed and we were unable to recover it. 00:25:48.867 [2024-11-20 07:27:52.096872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-11-20 07:27:52.096902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.867 qpair failed and we were unable to recover it. 00:25:48.867 [2024-11-20 07:27:52.097078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-11-20 07:27:52.097111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.867 qpair failed and we were unable to recover it. 00:25:48.867 [2024-11-20 07:27:52.097212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-11-20 07:27:52.097238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.867 qpair failed and we were unable to recover it. 
00:25:48.867 [2024-11-20 07:27:52.097338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-11-20 07:27:52.097367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.867 qpair failed and we were unable to recover it. 00:25:48.867 [2024-11-20 07:27:52.097480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-11-20 07:27:52.097507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.867 qpair failed and we were unable to recover it. 00:25:48.867 [2024-11-20 07:27:52.097626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-11-20 07:27:52.097653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.867 qpair failed and we were unable to recover it. 00:25:48.867 [2024-11-20 07:27:52.097776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-11-20 07:27:52.097811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.867 qpair failed and we were unable to recover it. 00:25:48.867 [2024-11-20 07:27:52.097989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-11-20 07:27:52.098015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.867 qpair failed and we were unable to recover it. 00:25:48.867 [2024-11-20 07:27:52.098142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-11-20 07:27:52.098185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.867 qpair failed and we were unable to recover it. 00:25:48.867 [2024-11-20 07:27:52.098324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-11-20 07:27:52.098353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.867 qpair failed and we were unable to recover it. 00:25:48.867 [2024-11-20 07:27:52.098465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-11-20 07:27:52.098492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.867 qpair failed and we were unable to recover it. 00:25:48.867 [2024-11-20 07:27:52.098580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-11-20 07:27:52.098608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.867 qpair failed and we were unable to recover it. 00:25:48.867 [2024-11-20 07:27:52.098769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-11-20 07:27:52.098818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.867 qpair failed and we were unable to recover it. 
00:25:48.867 [2024-11-20 07:27:52.098955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-11-20 07:27:52.098997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.867 qpair failed and we were unable to recover it. 00:25:48.867 [2024-11-20 07:27:52.099139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-11-20 07:27:52.099166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.867 qpair failed and we were unable to recover it. 00:25:48.867 [2024-11-20 07:27:52.099254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-11-20 07:27:52.099281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.867 qpair failed and we were unable to recover it. 00:25:48.867 [2024-11-20 07:27:52.099384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-11-20 07:27:52.099423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.867 qpair failed and we were unable to recover it. 00:25:48.867 [2024-11-20 07:27:52.099514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-11-20 07:27:52.099542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.867 qpair failed and we were unable to recover it. 00:25:48.867 [2024-11-20 07:27:52.099662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-11-20 07:27:52.099696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.867 qpair failed and we were unable to recover it. 00:25:48.867 [2024-11-20 07:27:52.099855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-11-20 07:27:52.099903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.867 qpair failed and we were unable to recover it. 00:25:48.867 [2024-11-20 07:27:52.100053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.867 [2024-11-20 07:27:52.100106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.867 qpair failed and we were unable to recover it. 00:25:48.868 [2024-11-20 07:27:52.100237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.868 [2024-11-20 07:27:52.100263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.868 qpair failed and we were unable to recover it. 00:25:48.868 [2024-11-20 07:27:52.100406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.868 [2024-11-20 07:27:52.100447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.868 qpair failed and we were unable to recover it. 
00:25:48.868 [2024-11-20 07:27:52.100560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.868 [2024-11-20 07:27:52.100604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.868 qpair failed and we were unable to recover it. 00:25:48.868 [2024-11-20 07:27:52.100798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.868 [2024-11-20 07:27:52.100842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.868 qpair failed and we were unable to recover it. 00:25:48.868 [2024-11-20 07:27:52.100979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.868 [2024-11-20 07:27:52.101010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.868 qpair failed and we were unable to recover it. 00:25:48.868 [2024-11-20 07:27:52.101139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.868 [2024-11-20 07:27:52.101169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.868 qpair failed and we were unable to recover it. 00:25:48.868 [2024-11-20 07:27:52.101268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.868 [2024-11-20 07:27:52.101295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.868 qpair failed and we were unable to recover it. 00:25:48.868 [2024-11-20 07:27:52.101392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.868 [2024-11-20 07:27:52.101419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.868 qpair failed and we were unable to recover it. 00:25:48.868 [2024-11-20 07:27:52.101529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.868 [2024-11-20 07:27:52.101556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.868 qpair failed and we were unable to recover it. 00:25:48.868 [2024-11-20 07:27:52.101754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.868 [2024-11-20 07:27:52.101801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.868 qpair failed and we were unable to recover it. 00:25:48.868 [2024-11-20 07:27:52.102004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.868 [2024-11-20 07:27:52.102039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.868 qpair failed and we were unable to recover it. 00:25:48.868 [2024-11-20 07:27:52.102204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.868 [2024-11-20 07:27:52.102247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.868 qpair failed and we were unable to recover it. 
00:25:48.868 [2024-11-20 07:27:52.102353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.868 [2024-11-20 07:27:52.102380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.868 qpair failed and we were unable to recover it. 00:25:48.868 [2024-11-20 07:27:52.102469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.868 [2024-11-20 07:27:52.102496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.868 qpair failed and we were unable to recover it. 00:25:48.868 [2024-11-20 07:27:52.102618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.868 [2024-11-20 07:27:52.102647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.868 qpair failed and we were unable to recover it. 00:25:48.868 [2024-11-20 07:27:52.102781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.868 [2024-11-20 07:27:52.102825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.868 qpair failed and we were unable to recover it. 00:25:48.868 [2024-11-20 07:27:52.102941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.868 [2024-11-20 07:27:52.102988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.868 qpair failed and we were unable to recover it. 00:25:48.868 [2024-11-20 07:27:52.103127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.868 [2024-11-20 07:27:52.103161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.868 qpair failed and we were unable to recover it. 00:25:48.868 [2024-11-20 07:27:52.103319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.868 [2024-11-20 07:27:52.103346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.868 qpair failed and we were unable to recover it. 00:25:48.868 [2024-11-20 07:27:52.103433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.868 [2024-11-20 07:27:52.103460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.868 qpair failed and we were unable to recover it. 00:25:48.868 [2024-11-20 07:27:52.103548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.868 [2024-11-20 07:27:52.103575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.868 qpair failed and we were unable to recover it. 00:25:48.868 [2024-11-20 07:27:52.103709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.868 [2024-11-20 07:27:52.103766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.868 qpair failed and we were unable to recover it. 
00:25:48.868 [2024-11-20 07:27:52.103925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.868 [2024-11-20 07:27:52.103959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.868 qpair failed and we were unable to recover it. 00:25:48.868 [2024-11-20 07:27:52.104098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.868 [2024-11-20 07:27:52.104133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.868 qpair failed and we were unable to recover it. 00:25:48.868 [2024-11-20 07:27:52.104242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.868 [2024-11-20 07:27:52.104270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.868 qpair failed and we were unable to recover it. 00:25:48.868 [2024-11-20 07:27:52.104366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.868 [2024-11-20 07:27:52.104397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.868 qpair failed and we were unable to recover it. 00:25:48.868 [2024-11-20 07:27:52.104487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.868 [2024-11-20 07:27:52.104514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.868 qpair failed and we were unable to recover it. 00:25:48.868 [2024-11-20 07:27:52.104629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.868 [2024-11-20 07:27:52.104680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.868 qpair failed and we were unable to recover it. 00:25:48.868 [2024-11-20 07:27:52.104772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.868 [2024-11-20 07:27:52.104801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.868 qpair failed and we were unable to recover it. 00:25:48.868 [2024-11-20 07:27:52.104897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.868 [2024-11-20 07:27:52.104926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.868 qpair failed and we were unable to recover it. 00:25:48.868 [2024-11-20 07:27:52.105067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.868 [2024-11-20 07:27:52.105115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.868 qpair failed and we were unable to recover it. 00:25:48.868 [2024-11-20 07:27:52.105201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.868 [2024-11-20 07:27:52.105229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.868 qpair failed and we were unable to recover it. 
00:25:48.868 [2024-11-20 07:27:52.105349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.868 [2024-11-20 07:27:52.105377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.868 qpair failed and we were unable to recover it. 00:25:48.868 [2024-11-20 07:27:52.105465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.868 [2024-11-20 07:27:52.105493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.868 qpair failed and we were unable to recover it. 00:25:48.868 [2024-11-20 07:27:52.105617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.868 [2024-11-20 07:27:52.105644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.868 qpair failed and we were unable to recover it. 00:25:48.868 [2024-11-20 07:27:52.105762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.868 [2024-11-20 07:27:52.105789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.868 qpair failed and we were unable to recover it. 00:25:48.868 [2024-11-20 07:27:52.105873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.868 [2024-11-20 07:27:52.105900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.868 qpair failed and we were unable to recover it. 00:25:48.868 [2024-11-20 07:27:52.105989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.868 [2024-11-20 07:27:52.106016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.869 qpair failed and we were unable to recover it. 00:25:48.869 [2024-11-20 07:27:52.106133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.869 [2024-11-20 07:27:52.106160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.869 qpair failed and we were unable to recover it. 00:25:48.869 [2024-11-20 07:27:52.106242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.869 [2024-11-20 07:27:52.106269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.869 qpair failed and we were unable to recover it. 00:25:48.869 [2024-11-20 07:27:52.106400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.869 [2024-11-20 07:27:52.106429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.869 qpair failed and we were unable to recover it. 00:25:48.869 [2024-11-20 07:27:52.106520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.869 [2024-11-20 07:27:52.106547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.869 qpair failed and we were unable to recover it. 
00:25:48.869 [2024-11-20 07:27:52.106681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.869 [2024-11-20 07:27:52.106707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.869 qpair failed and we were unable to recover it. 00:25:48.869 [2024-11-20 07:27:52.106838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.869 [2024-11-20 07:27:52.106865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.869 qpair failed and we were unable to recover it. 00:25:48.869 [2024-11-20 07:27:52.106953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.869 [2024-11-20 07:27:52.106981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.869 qpair failed and we were unable to recover it. 00:25:48.869 [2024-11-20 07:27:52.107069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.869 [2024-11-20 07:27:52.107096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.869 qpair failed and we were unable to recover it. 00:25:48.869 [2024-11-20 07:27:52.107227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.869 [2024-11-20 07:27:52.107266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.869 qpair failed and we were unable to recover it. 00:25:48.869 [2024-11-20 07:27:52.107404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.869 [2024-11-20 07:27:52.107444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.869 qpair failed and we were unable to recover it. 00:25:48.869 [2024-11-20 07:27:52.107539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.869 [2024-11-20 07:27:52.107567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.869 qpair failed and we were unable to recover it. 00:25:48.869 [2024-11-20 07:27:52.107647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.869 [2024-11-20 07:27:52.107674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.869 qpair failed and we were unable to recover it. 00:25:48.869 [2024-11-20 07:27:52.107793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.869 [2024-11-20 07:27:52.107819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.869 qpair failed and we were unable to recover it. 00:25:48.869 [2024-11-20 07:27:52.107933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.869 [2024-11-20 07:27:52.107959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.869 qpair failed and we were unable to recover it. 
00:25:48.869 [2024-11-20 07:27:52.108068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.869 [2024-11-20 07:27:52.108118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.869 qpair failed and we were unable to recover it. 00:25:48.869 [2024-11-20 07:27:52.108217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.869 [2024-11-20 07:27:52.108257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.869 qpair failed and we were unable to recover it. 00:25:48.869 [2024-11-20 07:27:52.108371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.869 [2024-11-20 07:27:52.108410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.869 qpair failed and we were unable to recover it. 00:25:48.869 [2024-11-20 07:27:52.108507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.869 [2024-11-20 07:27:52.108535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.869 qpair failed and we were unable to recover it. 00:25:48.869 [2024-11-20 07:27:52.108659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.869 [2024-11-20 07:27:52.108685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.869 qpair failed and we were unable to recover it. 00:25:48.869 [2024-11-20 07:27:52.108778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.869 [2024-11-20 07:27:52.108805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.869 qpair failed and we were unable to recover it. 00:25:48.869 [2024-11-20 07:27:52.108948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.869 [2024-11-20 07:27:52.108982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.869 qpair failed and we were unable to recover it. 00:25:48.869 [2024-11-20 07:27:52.109148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.869 [2024-11-20 07:27:52.109183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.869 qpair failed and we were unable to recover it. 00:25:48.869 [2024-11-20 07:27:52.109337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.869 [2024-11-20 07:27:52.109377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.869 qpair failed and we were unable to recover it. 00:25:48.869 [2024-11-20 07:27:52.109500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.869 [2024-11-20 07:27:52.109529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.869 qpair failed and we were unable to recover it. 
00:25:48.869 [2024-11-20 07:27:52.109703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.869 [2024-11-20 07:27:52.109732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.869 qpair failed and we were unable to recover it. 00:25:48.869 [2024-11-20 07:27:52.109839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.869 [2024-11-20 07:27:52.109866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.869 qpair failed and we were unable to recover it. 00:25:48.869 [2024-11-20 07:27:52.110048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.869 [2024-11-20 07:27:52.110074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.869 qpair failed and we were unable to recover it. 00:25:48.869 [2024-11-20 07:27:52.110259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.869 [2024-11-20 07:27:52.110318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.869 qpair failed and we were unable to recover it. 00:25:48.869 [2024-11-20 07:27:52.110462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.869 [2024-11-20 07:27:52.110501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.869 qpair failed and we were unable to recover it. 00:25:48.869 [2024-11-20 07:27:52.110612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.869 [2024-11-20 07:27:52.110658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.869 qpair failed and we were unable to recover it. 00:25:48.869 [2024-11-20 07:27:52.110775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.869 [2024-11-20 07:27:52.110820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.869 qpair failed and we were unable to recover it. 00:25:48.869 [2024-11-20 07:27:52.110943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.869 [2024-11-20 07:27:52.110972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.869 qpair failed and we were unable to recover it. 00:25:48.869 [2024-11-20 07:27:52.111105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.869 [2024-11-20 07:27:52.111138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.869 qpair failed and we were unable to recover it. 00:25:48.869 [2024-11-20 07:27:52.111275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.869 [2024-11-20 07:27:52.111327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.869 qpair failed and we were unable to recover it. 
00:25:48.869 [2024-11-20 07:27:52.111429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.869 [2024-11-20 07:27:52.111458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.869 qpair failed and we were unable to recover it. 00:25:48.870 [2024-11-20 07:27:52.111544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.870 [2024-11-20 07:27:52.111571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.870 qpair failed and we were unable to recover it. 00:25:48.870 [2024-11-20 07:27:52.111688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.870 [2024-11-20 07:27:52.111714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.870 qpair failed and we were unable to recover it. 00:25:48.870 [2024-11-20 07:27:52.111806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.870 [2024-11-20 07:27:52.111832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.870 qpair failed and we were unable to recover it. 00:25:48.870 [2024-11-20 07:27:52.111923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.870 [2024-11-20 07:27:52.111952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.870 qpair failed and we were unable to recover it. 00:25:48.870 [2024-11-20 07:27:52.112030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.870 [2024-11-20 07:27:52.112057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.870 qpair failed and we were unable to recover it. 00:25:48.870 [2024-11-20 07:27:52.112146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.870 [2024-11-20 07:27:52.112172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.870 qpair failed and we were unable to recover it. 00:25:48.870 [2024-11-20 07:27:52.112280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.870 [2024-11-20 07:27:52.112333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.870 qpair failed and we were unable to recover it. 00:25:48.870 [2024-11-20 07:27:52.112430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.870 [2024-11-20 07:27:52.112458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.870 qpair failed and we were unable to recover it. 00:25:48.870 [2024-11-20 07:27:52.112596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.870 [2024-11-20 07:27:52.112623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.870 qpair failed and we were unable to recover it. 
00:25:48.870 [2024-11-20 07:27:52.112747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.870 [2024-11-20 07:27:52.112774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.870 qpair failed and we were unable to recover it. 00:25:48.870 [2024-11-20 07:27:52.112862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.870 [2024-11-20 07:27:52.112891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.870 qpair failed and we were unable to recover it. 00:25:48.870 [2024-11-20 07:27:52.113090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.870 [2024-11-20 07:27:52.113125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.870 qpair failed and we were unable to recover it. 00:25:48.870 [2024-11-20 07:27:52.113245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.870 [2024-11-20 07:27:52.113274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.870 qpair failed and we were unable to recover it. 00:25:48.870 [2024-11-20 07:27:52.113380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.870 [2024-11-20 07:27:52.113408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.870 qpair failed and we were unable to recover it. 00:25:48.870 [2024-11-20 07:27:52.113542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.870 [2024-11-20 07:27:52.113569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.870 qpair failed and we were unable to recover it. 00:25:48.870 [2024-11-20 07:27:52.113687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.870 [2024-11-20 07:27:52.113736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.870 qpair failed and we were unable to recover it. 00:25:48.870 [2024-11-20 07:27:52.113890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.870 [2024-11-20 07:27:52.113936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.870 qpair failed and we were unable to recover it. 00:25:48.870 [2024-11-20 07:27:52.114014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.870 [2024-11-20 07:27:52.114040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.870 qpair failed and we were unable to recover it. 00:25:48.870 [2024-11-20 07:27:52.114144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.870 [2024-11-20 07:27:52.114171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.870 qpair failed and we were unable to recover it. 
00:25:48.870 [2024-11-20 07:27:52.114274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.870 [2024-11-20 07:27:52.114320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.870 qpair failed and we were unable to recover it. 00:25:48.870 [2024-11-20 07:27:52.114422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.870 [2024-11-20 07:27:52.114461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.870 qpair failed and we were unable to recover it. 00:25:48.870 [2024-11-20 07:27:52.114565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.870 [2024-11-20 07:27:52.114605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.870 qpair failed and we were unable to recover it. 00:25:48.870 [2024-11-20 07:27:52.114692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.870 [2024-11-20 07:27:52.114720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.870 qpair failed and we were unable to recover it. 00:25:48.870 [2024-11-20 07:27:52.114863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.870 [2024-11-20 07:27:52.114910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.870 qpair failed and we were unable to recover it. 00:25:48.870 [2024-11-20 07:27:52.115046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.870 [2024-11-20 07:27:52.115094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.870 qpair failed and we were unable to recover it. 00:25:48.870 [2024-11-20 07:27:52.115213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.870 [2024-11-20 07:27:52.115239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.870 qpair failed and we were unable to recover it. 00:25:48.870 [2024-11-20 07:27:52.115376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.870 [2024-11-20 07:27:52.115416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.870 qpair failed and we were unable to recover it. 00:25:48.870 [2024-11-20 07:27:52.115528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.870 [2024-11-20 07:27:52.115558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.870 qpair failed and we were unable to recover it. 00:25:48.870 [2024-11-20 07:27:52.115721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.870 [2024-11-20 07:27:52.115752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.870 qpair failed and we were unable to recover it. 
00:25:48.870 [2024-11-20 07:27:52.115884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.870 [2024-11-20 07:27:52.115928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.870 qpair failed and we were unable to recover it. 00:25:48.870 [2024-11-20 07:27:52.116042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.870 [2024-11-20 07:27:52.116090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.870 qpair failed and we were unable to recover it. 00:25:48.870 [2024-11-20 07:27:52.116229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.870 [2024-11-20 07:27:52.116256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.870 qpair failed and we were unable to recover it. 00:25:48.870 [2024-11-20 07:27:52.116379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.870 [2024-11-20 07:27:52.116407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.870 qpair failed and we were unable to recover it. 00:25:48.870 [2024-11-20 07:27:52.116505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.870 [2024-11-20 07:27:52.116532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.870 qpair failed and we were unable to recover it. 00:25:48.870 [2024-11-20 07:27:52.116678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.870 [2024-11-20 07:27:52.116708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.870 qpair failed and we were unable to recover it. 00:25:48.870 [2024-11-20 07:27:52.116848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.870 [2024-11-20 07:27:52.116883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.870 qpair failed and we were unable to recover it. 00:25:48.870 [2024-11-20 07:27:52.117024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.870 [2024-11-20 07:27:52.117062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.870 qpair failed and we were unable to recover it. 00:25:48.870 [2024-11-20 07:27:52.117223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.870 [2024-11-20 07:27:52.117263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.870 qpair failed and we were unable to recover it. 00:25:48.871 [2024-11-20 07:27:52.117376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.871 [2024-11-20 07:27:52.117416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.871 qpair failed and we were unable to recover it. 
00:25:48.871 [2024-11-20 07:27:52.117523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.871 [2024-11-20 07:27:52.117563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.871 qpair failed and we were unable to recover it. 00:25:48.871 [2024-11-20 07:27:52.117664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.871 [2024-11-20 07:27:52.117692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.871 qpair failed and we were unable to recover it. 00:25:48.871 [2024-11-20 07:27:52.117832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.871 [2024-11-20 07:27:52.117867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.871 qpair failed and we were unable to recover it. 00:25:48.871 [2024-11-20 07:27:52.118012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.871 [2024-11-20 07:27:52.118052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.871 qpair failed and we were unable to recover it. 00:25:48.871 [2024-11-20 07:27:52.118177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.871 [2024-11-20 07:27:52.118205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.871 qpair failed and we were unable to recover it. 00:25:48.871 [2024-11-20 07:27:52.118336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.871 [2024-11-20 07:27:52.118367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.871 qpair failed and we were unable to recover it. 00:25:48.871 [2024-11-20 07:27:52.118501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.871 [2024-11-20 07:27:52.118528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.871 qpair failed and we were unable to recover it. 00:25:48.871 [2024-11-20 07:27:52.118633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.871 [2024-11-20 07:27:52.118661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.871 qpair failed and we were unable to recover it. 00:25:48.871 [2024-11-20 07:27:52.118775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.871 [2024-11-20 07:27:52.118824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.871 qpair failed and we were unable to recover it. 00:25:48.871 [2024-11-20 07:27:52.118952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.871 [2024-11-20 07:27:52.118999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.871 qpair failed and we were unable to recover it. 
00:25:48.871 [2024-11-20 07:27:52.119120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.871 [2024-11-20 07:27:52.119150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.871 qpair failed and we were unable to recover it. 00:25:48.871 [2024-11-20 07:27:52.119239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.871 [2024-11-20 07:27:52.119266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.871 qpair failed and we were unable to recover it. 00:25:48.871 [2024-11-20 07:27:52.119376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.871 [2024-11-20 07:27:52.119415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.871 qpair failed and we were unable to recover it. 00:25:48.871 [2024-11-20 07:27:52.119510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.871 [2024-11-20 07:27:52.119539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.871 qpair failed and we were unable to recover it. 00:25:48.871 [2024-11-20 07:27:52.119634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.871 [2024-11-20 07:27:52.119660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.871 qpair failed and we were unable to recover it. 00:25:48.871 [2024-11-20 07:27:52.119754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.871 [2024-11-20 07:27:52.119781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.871 qpair failed and we were unable to recover it. 00:25:48.871 [2024-11-20 07:27:52.119885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.871 [2024-11-20 07:27:52.119928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.871 qpair failed and we were unable to recover it. 00:25:48.871 [2024-11-20 07:27:52.120077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.871 [2024-11-20 07:27:52.120103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.871 qpair failed and we were unable to recover it. 00:25:48.871 [2024-11-20 07:27:52.120214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.871 [2024-11-20 07:27:52.120240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.871 qpair failed and we were unable to recover it. 00:25:48.871 [2024-11-20 07:27:52.120335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.871 [2024-11-20 07:27:52.120361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.871 qpair failed and we were unable to recover it. 
00:25:48.871 [2024-11-20 07:27:52.120443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.871 [2024-11-20 07:27:52.120475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.871 qpair failed and we were unable to recover it. 00:25:48.871 [2024-11-20 07:27:52.120559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.871 [2024-11-20 07:27:52.120584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.871 qpair failed and we were unable to recover it. 00:25:48.871 [2024-11-20 07:27:52.120691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.871 [2024-11-20 07:27:52.120721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.871 qpair failed and we were unable to recover it. 00:25:48.871 [2024-11-20 07:27:52.120851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.871 [2024-11-20 07:27:52.120880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.871 qpair failed and we were unable to recover it. 00:25:48.871 [2024-11-20 07:27:52.121036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.871 [2024-11-20 07:27:52.121065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.871 qpair failed and we were unable to recover it. 00:25:48.871 [2024-11-20 07:27:52.121160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.871 [2024-11-20 07:27:52.121186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.871 qpair failed and we were unable to recover it. 00:25:48.871 [2024-11-20 07:27:52.121298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.871 [2024-11-20 07:27:52.121334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.871 qpair failed and we were unable to recover it. 00:25:48.871 [2024-11-20 07:27:52.121447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.871 [2024-11-20 07:27:52.121473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.871 qpair failed and we were unable to recover it. 00:25:48.871 [2024-11-20 07:27:52.121573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.871 [2024-11-20 07:27:52.121602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.871 qpair failed and we were unable to recover it. 00:25:48.871 [2024-11-20 07:27:52.121722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.871 [2024-11-20 07:27:52.121752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.871 qpair failed and we were unable to recover it. 
00:25:48.871 [2024-11-20 07:27:52.121846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.871 [2024-11-20 07:27:52.121875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.871 qpair failed and we were unable to recover it. 00:25:48.871 [2024-11-20 07:27:52.121994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.871 [2024-11-20 07:27:52.122020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.871 qpair failed and we were unable to recover it. 00:25:48.871 [2024-11-20 07:27:52.122115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.871 [2024-11-20 07:27:52.122154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.871 qpair failed and we were unable to recover it. 00:25:48.871 [2024-11-20 07:27:52.122267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.872 [2024-11-20 07:27:52.122311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.872 qpair failed and we were unable to recover it. 00:25:48.872 [2024-11-20 07:27:52.122403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.872 [2024-11-20 07:27:52.122430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.872 qpair failed and we were unable to recover it. 00:25:48.872 [2024-11-20 07:27:52.122513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.872 [2024-11-20 07:27:52.122539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.872 qpair failed and we were unable to recover it. 00:25:48.872 [2024-11-20 07:27:52.122678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.872 [2024-11-20 07:27:52.122708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.872 qpair failed and we were unable to recover it. 00:25:48.872 [2024-11-20 07:27:52.122832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.872 [2024-11-20 07:27:52.122861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.872 qpair failed and we were unable to recover it. 00:25:48.872 [2024-11-20 07:27:52.122981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.872 [2024-11-20 07:27:52.123008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.872 qpair failed and we were unable to recover it. 00:25:48.872 [2024-11-20 07:27:52.123175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.872 [2024-11-20 07:27:52.123214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.872 qpair failed and we were unable to recover it. 
00:25:48.872 [2024-11-20 07:27:52.123316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.872 [2024-11-20 07:27:52.123347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.872 qpair failed and we were unable to recover it. 00:25:48.872 [2024-11-20 07:27:52.123463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.872 [2024-11-20 07:27:52.123490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.872 qpair failed and we were unable to recover it. 00:25:48.872 [2024-11-20 07:27:52.123581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.872 [2024-11-20 07:27:52.123607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.872 qpair failed and we were unable to recover it. 00:25:48.872 [2024-11-20 07:27:52.123715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.872 [2024-11-20 07:27:52.123748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.872 qpair failed and we were unable to recover it. 00:25:48.872 [2024-11-20 07:27:52.123854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.872 [2024-11-20 07:27:52.123881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.872 qpair failed and we were unable to recover it. 00:25:48.872 [2024-11-20 07:27:52.124003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.872 [2024-11-20 07:27:52.124047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.872 qpair failed and we were unable to recover it. 00:25:48.872 [2024-11-20 07:27:52.124195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.872 [2024-11-20 07:27:52.124253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.872 qpair failed and we were unable to recover it. 00:25:48.872 [2024-11-20 07:27:52.124365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.872 [2024-11-20 07:27:52.124401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.872 qpair failed and we were unable to recover it. 00:25:48.872 [2024-11-20 07:27:52.124495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.872 [2024-11-20 07:27:52.124522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.872 qpair failed and we were unable to recover it. 00:25:48.872 [2024-11-20 07:27:52.124635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.872 [2024-11-20 07:27:52.124661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.872 qpair failed and we were unable to recover it. 
00:25:48.872 [2024-11-20 07:27:52.124750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.872 [2024-11-20 07:27:52.124798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.872 qpair failed and we were unable to recover it. 00:25:48.872 [2024-11-20 07:27:52.124938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.872 [2024-11-20 07:27:52.124974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.872 qpair failed and we were unable to recover it. 00:25:48.872 [2024-11-20 07:27:52.125090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.872 [2024-11-20 07:27:52.125135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.872 qpair failed and we were unable to recover it. 00:25:48.872 [2024-11-20 07:27:52.125253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.872 [2024-11-20 07:27:52.125281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.872 qpair failed and we were unable to recover it. 00:25:48.872 [2024-11-20 07:27:52.125382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.872 [2024-11-20 07:27:52.125410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.872 qpair failed and we were unable to recover it. 00:25:48.872 [2024-11-20 07:27:52.125495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.872 [2024-11-20 07:27:52.125523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.872 qpair failed and we were unable to recover it. 00:25:48.872 [2024-11-20 07:27:52.125700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.872 [2024-11-20 07:27:52.125727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.872 qpair failed and we were unable to recover it. 00:25:48.872 [2024-11-20 07:27:52.125811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.872 [2024-11-20 07:27:52.125838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.872 qpair failed and we were unable to recover it. 00:25:48.872 [2024-11-20 07:27:52.125950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.872 [2024-11-20 07:27:52.126001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.872 qpair failed and we were unable to recover it. 00:25:48.872 [2024-11-20 07:27:52.126136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.872 [2024-11-20 07:27:52.126163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.872 qpair failed and we were unable to recover it. 
00:25:48.872 [2024-11-20 07:27:52.126300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.872 [2024-11-20 07:27:52.126339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.872 qpair failed and we were unable to recover it. 00:25:48.872 [2024-11-20 07:27:52.126431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.872 [2024-11-20 07:27:52.126458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.872 qpair failed and we were unable to recover it. 00:25:48.872 [2024-11-20 07:27:52.126541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.872 [2024-11-20 07:27:52.126567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.872 qpair failed and we were unable to recover it. 00:25:48.872 [2024-11-20 07:27:52.126702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.872 [2024-11-20 07:27:52.126747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.872 qpair failed and we were unable to recover it. 00:25:48.872 [2024-11-20 07:27:52.126836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.872 [2024-11-20 07:27:52.126864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.872 qpair failed and we were unable to recover it. 00:25:48.872 [2024-11-20 07:27:52.126977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.872 [2024-11-20 07:27:52.127003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.872 qpair failed and we were unable to recover it. 00:25:48.872 [2024-11-20 07:27:52.127120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.872 [2024-11-20 07:27:52.127148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.872 qpair failed and we were unable to recover it. 00:25:48.872 [2024-11-20 07:27:52.127262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.872 [2024-11-20 07:27:52.127290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.872 qpair failed and we were unable to recover it. 00:25:48.872 [2024-11-20 07:27:52.127395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.872 [2024-11-20 07:27:52.127422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.872 qpair failed and we were unable to recover it. 00:25:48.873 [2024-11-20 07:27:52.127508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.873 [2024-11-20 07:27:52.127535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.873 qpair failed and we were unable to recover it. 
00:25:48.873 [2024-11-20 07:27:52.127653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.873 [2024-11-20 07:27:52.127680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.873 qpair failed and we were unable to recover it. 00:25:48.873 [2024-11-20 07:27:52.127799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.873 [2024-11-20 07:27:52.127825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.873 qpair failed and we were unable to recover it. 00:25:48.873 [2024-11-20 07:27:52.127932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.873 [2024-11-20 07:27:52.127959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.873 qpair failed and we were unable to recover it. 00:25:48.873 [2024-11-20 07:27:52.128060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.873 [2024-11-20 07:27:52.128100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.873 qpair failed and we were unable to recover it. 00:25:48.873 [2024-11-20 07:27:52.128219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.873 [2024-11-20 07:27:52.128258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.873 qpair failed and we were unable to recover it. 00:25:48.873 [2024-11-20 07:27:52.128378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.873 [2024-11-20 07:27:52.128418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.873 qpair failed and we were unable to recover it. 00:25:48.873 [2024-11-20 07:27:52.128514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.873 [2024-11-20 07:27:52.128542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.873 qpair failed and we were unable to recover it. 00:25:48.873 [2024-11-20 07:27:52.128630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.873 [2024-11-20 07:27:52.128657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.873 qpair failed and we were unable to recover it. 00:25:48.873 [2024-11-20 07:27:52.128747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.873 [2024-11-20 07:27:52.128773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.873 qpair failed and we were unable to recover it. 00:25:48.873 [2024-11-20 07:27:52.128863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.873 [2024-11-20 07:27:52.128890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.873 qpair failed and we were unable to recover it. 
00:25:48.873 [2024-11-20 07:27:52.129030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.873 [2024-11-20 07:27:52.129056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.873 qpair failed and we were unable to recover it. 00:25:48.873 [2024-11-20 07:27:52.129220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.873 [2024-11-20 07:27:52.129246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.873 qpair failed and we were unable to recover it. 00:25:48.873 [2024-11-20 07:27:52.129343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.873 [2024-11-20 07:27:52.129372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.873 qpair failed and we were unable to recover it. 00:25:48.873 [2024-11-20 07:27:52.129486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.873 [2024-11-20 07:27:52.129513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.873 qpair failed and we were unable to recover it. 00:25:48.873 [2024-11-20 07:27:52.129600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.873 [2024-11-20 07:27:52.129627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.873 qpair failed and we were unable to recover it. 00:25:48.873 [2024-11-20 07:27:52.129741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.873 [2024-11-20 07:27:52.129773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.873 qpair failed and we were unable to recover it. 00:25:48.873 [2024-11-20 07:27:52.129863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.873 [2024-11-20 07:27:52.129889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.873 qpair failed and we were unable to recover it. 00:25:48.873 [2024-11-20 07:27:52.130029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.873 [2024-11-20 07:27:52.130060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.873 qpair failed and we were unable to recover it. 00:25:48.873 [2024-11-20 07:27:52.130144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.873 [2024-11-20 07:27:52.130172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.873 qpair failed and we were unable to recover it. 00:25:48.873 [2024-11-20 07:27:52.130300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.873 [2024-11-20 07:27:52.130346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.873 qpair failed and we were unable to recover it. 
00:25:48.873 [2024-11-20 07:27:52.130447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.873 [2024-11-20 07:27:52.130474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.873 qpair failed and we were unable to recover it. 00:25:48.873 [2024-11-20 07:27:52.130589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.873 [2024-11-20 07:27:52.130615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.873 qpair failed and we were unable to recover it. 00:25:48.873 [2024-11-20 07:27:52.130723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.873 [2024-11-20 07:27:52.130770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.873 qpair failed and we were unable to recover it. 00:25:48.873 [2024-11-20 07:27:52.130918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.873 [2024-11-20 07:27:52.130944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.873 qpair failed and we were unable to recover it. 00:25:48.873 [2024-11-20 07:27:52.131102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.873 [2024-11-20 07:27:52.131140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.873 qpair failed and we were unable to recover it. 00:25:48.873 [2024-11-20 07:27:52.131260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.873 [2024-11-20 07:27:52.131288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.873 qpair failed and we were unable to recover it. 00:25:48.873 [2024-11-20 07:27:52.131406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.873 [2024-11-20 07:27:52.131445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.873 qpair failed and we were unable to recover it. 00:25:48.873 [2024-11-20 07:27:52.131540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.873 [2024-11-20 07:27:52.131568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.873 qpair failed and we were unable to recover it. 00:25:48.873 [2024-11-20 07:27:52.131683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.873 [2024-11-20 07:27:52.131709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.873 qpair failed and we were unable to recover it. 00:25:48.873 [2024-11-20 07:27:52.131792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.873 [2024-11-20 07:27:52.131819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.873 qpair failed and we were unable to recover it. 
00:25:48.873 [2024-11-20 07:27:52.131963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.873 [2024-11-20 07:27:52.131990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.873 qpair failed and we were unable to recover it. 00:25:48.873 [2024-11-20 07:27:52.132111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.873 [2024-11-20 07:27:52.132137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.873 qpair failed and we were unable to recover it. 00:25:48.873 [2024-11-20 07:27:52.132268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.873 [2024-11-20 07:27:52.132322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.873 qpair failed and we were unable to recover it. 00:25:48.873 [2024-11-20 07:27:52.132429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.873 [2024-11-20 07:27:52.132457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.873 qpair failed and we were unable to recover it. 00:25:48.873 [2024-11-20 07:27:52.132606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.873 [2024-11-20 07:27:52.132634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.873 qpair failed and we were unable to recover it. 00:25:48.873 [2024-11-20 07:27:52.132732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.873 [2024-11-20 07:27:52.132759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.873 qpair failed and we were unable to recover it. 00:25:48.873 [2024-11-20 07:27:52.132847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.873 [2024-11-20 07:27:52.132897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.873 qpair failed and we were unable to recover it. 00:25:48.874 [2024-11-20 07:27:52.133018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.874 [2024-11-20 07:27:52.133046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.874 qpair failed and we were unable to recover it. 00:25:48.874 [2024-11-20 07:27:52.133159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.874 [2024-11-20 07:27:52.133186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.874 qpair failed and we were unable to recover it. 00:25:48.874 [2024-11-20 07:27:52.133274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.874 [2024-11-20 07:27:52.133312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.874 qpair failed and we were unable to recover it. 
00:25:48.874 [2024-11-20 07:27:52.133425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.874 [2024-11-20 07:27:52.133453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.874 qpair failed and we were unable to recover it. 00:25:48.874 [2024-11-20 07:27:52.133547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.874 [2024-11-20 07:27:52.133575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.874 qpair failed and we were unable to recover it. 00:25:48.874 [2024-11-20 07:27:52.133668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.874 [2024-11-20 07:27:52.133696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.874 qpair failed and we were unable to recover it. 00:25:48.874 [2024-11-20 07:27:52.133779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.874 [2024-11-20 07:27:52.133806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.874 qpair failed and we were unable to recover it. 00:25:48.874 [2024-11-20 07:27:52.133893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.874 [2024-11-20 07:27:52.133921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.874 qpair failed and we were unable to recover it. 00:25:48.874 [2024-11-20 07:27:52.134019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.874 [2024-11-20 07:27:52.134058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.874 qpair failed and we were unable to recover it. 00:25:48.874 [2024-11-20 07:27:52.134153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.874 [2024-11-20 07:27:52.134182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.874 qpair failed and we were unable to recover it. 00:25:48.874 [2024-11-20 07:27:52.134267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.874 [2024-11-20 07:27:52.134294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.874 qpair failed and we were unable to recover it. 00:25:48.874 [2024-11-20 07:27:52.134414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.874 [2024-11-20 07:27:52.134441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.874 qpair failed and we were unable to recover it. 00:25:48.874 [2024-11-20 07:27:52.134530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.874 [2024-11-20 07:27:52.134556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.874 qpair failed and we were unable to recover it. 
00:25:48.874 [2024-11-20 07:27:52.134662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.874 [2024-11-20 07:27:52.134688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.874 qpair failed and we were unable to recover it. 00:25:48.874 [2024-11-20 07:27:52.134785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.874 [2024-11-20 07:27:52.134814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.874 qpair failed and we were unable to recover it. 00:25:48.874 [2024-11-20 07:27:52.134932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.874 [2024-11-20 07:27:52.134958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.874 qpair failed and we were unable to recover it. 00:25:48.874 [2024-11-20 07:27:52.135044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.874 [2024-11-20 07:27:52.135072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.874 qpair failed and we were unable to recover it. 00:25:48.874 [2024-11-20 07:27:52.135189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.874 [2024-11-20 07:27:52.135216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.874 qpair failed and we were unable to recover it. 00:25:48.874 [2024-11-20 07:27:52.135318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.874 [2024-11-20 07:27:52.135357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.874 qpair failed and we were unable to recover it. 00:25:48.874 [2024-11-20 07:27:52.135459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.874 [2024-11-20 07:27:52.135486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.874 qpair failed and we were unable to recover it. 00:25:48.874 [2024-11-20 07:27:52.135573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.874 [2024-11-20 07:27:52.135601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.874 qpair failed and we were unable to recover it. 00:25:48.874 [2024-11-20 07:27:52.135722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.874 [2024-11-20 07:27:52.135749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.874 qpair failed and we were unable to recover it. 00:25:48.874 [2024-11-20 07:27:52.135838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.874 [2024-11-20 07:27:52.135866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.874 qpair failed and we were unable to recover it. 
00:25:48.879 [2024-11-20 07:27:52.164385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.879 [2024-11-20 07:27:52.164411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.879 qpair failed and we were unable to recover it. 00:25:48.879 [2024-11-20 07:27:52.164494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.879 [2024-11-20 07:27:52.164520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.879 qpair failed and we were unable to recover it. 00:25:48.879 [2024-11-20 07:27:52.164611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.879 [2024-11-20 07:27:52.164639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.879 qpair failed and we were unable to recover it. 00:25:48.879 [2024-11-20 07:27:52.164739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.879 [2024-11-20 07:27:52.164766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.879 qpair failed and we were unable to recover it. 00:25:48.879 [2024-11-20 07:27:52.164848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.879 [2024-11-20 07:27:52.164874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.879 qpair failed and we were unable to recover it. 00:25:48.879 [2024-11-20 07:27:52.164988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.879 [2024-11-20 07:27:52.165016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.879 qpair failed and we were unable to recover it. 00:25:48.879 [2024-11-20 07:27:52.165108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.879 [2024-11-20 07:27:52.165148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.879 qpair failed and we were unable to recover it. 00:25:48.879 [2024-11-20 07:27:52.165281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.879 [2024-11-20 07:27:52.165331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.879 qpair failed and we were unable to recover it. 00:25:48.879 [2024-11-20 07:27:52.165429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.879 [2024-11-20 07:27:52.165468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.879 qpair failed and we were unable to recover it. 00:25:48.879 [2024-11-20 07:27:52.165561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.879 [2024-11-20 07:27:52.165588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.879 qpair failed and we were unable to recover it. 
00:25:48.879 [2024-11-20 07:27:52.165735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.879 [2024-11-20 07:27:52.165761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.879 qpair failed and we were unable to recover it. 00:25:48.879 [2024-11-20 07:27:52.165844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.880 [2024-11-20 07:27:52.165870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.880 qpair failed and we were unable to recover it. 00:25:48.880 [2024-11-20 07:27:52.165958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.880 [2024-11-20 07:27:52.165984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.880 qpair failed and we were unable to recover it. 00:25:48.880 [2024-11-20 07:27:52.166067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.880 [2024-11-20 07:27:52.166093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.880 qpair failed and we were unable to recover it. 00:25:48.880 [2024-11-20 07:27:52.166221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.880 [2024-11-20 07:27:52.166249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.880 qpair failed and we were unable to recover it. 00:25:48.880 [2024-11-20 07:27:52.166342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.880 [2024-11-20 07:27:52.166370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.880 qpair failed and we were unable to recover it. 00:25:48.880 [2024-11-20 07:27:52.166491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.880 [2024-11-20 07:27:52.166525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.880 qpair failed and we were unable to recover it. 00:25:48.880 [2024-11-20 07:27:52.166638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.880 [2024-11-20 07:27:52.166674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.880 qpair failed and we were unable to recover it. 00:25:48.880 [2024-11-20 07:27:52.166808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.880 [2024-11-20 07:27:52.166843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.880 qpair failed and we were unable to recover it. 00:25:48.880 [2024-11-20 07:27:52.166951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.880 [2024-11-20 07:27:52.166986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.880 qpair failed and we were unable to recover it. 
00:25:48.880 [2024-11-20 07:27:52.167149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.880 [2024-11-20 07:27:52.167199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.880 qpair failed and we were unable to recover it. 00:25:48.880 [2024-11-20 07:27:52.167371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.880 [2024-11-20 07:27:52.167400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.880 qpair failed and we were unable to recover it. 00:25:48.880 [2024-11-20 07:27:52.167494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.880 [2024-11-20 07:27:52.167523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.880 qpair failed and we were unable to recover it. 00:25:48.880 [2024-11-20 07:27:52.167634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.880 [2024-11-20 07:27:52.167685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.880 qpair failed and we were unable to recover it. 00:25:48.880 [2024-11-20 07:27:52.167785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.880 [2024-11-20 07:27:52.167831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.880 qpair failed and we were unable to recover it. 00:25:48.880 [2024-11-20 07:27:52.167977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.880 [2024-11-20 07:27:52.168026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.880 qpair failed and we were unable to recover it. 00:25:48.880 [2024-11-20 07:27:52.168112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.880 [2024-11-20 07:27:52.168138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.880 qpair failed and we were unable to recover it. 00:25:48.880 [2024-11-20 07:27:52.168220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.880 [2024-11-20 07:27:52.168247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.880 qpair failed and we were unable to recover it. 00:25:48.880 [2024-11-20 07:27:52.168339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.880 [2024-11-20 07:27:52.168366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.880 qpair failed and we were unable to recover it. 00:25:48.880 [2024-11-20 07:27:52.168483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.880 [2024-11-20 07:27:52.168510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.880 qpair failed and we were unable to recover it. 
00:25:48.880 [2024-11-20 07:27:52.168596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.880 [2024-11-20 07:27:52.168624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.880 qpair failed and we were unable to recover it. 00:25:48.880 [2024-11-20 07:27:52.168734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.880 [2024-11-20 07:27:52.168761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.880 qpair failed and we were unable to recover it. 00:25:48.880 [2024-11-20 07:27:52.168852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.880 [2024-11-20 07:27:52.168882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.880 qpair failed and we were unable to recover it. 00:25:48.880 [2024-11-20 07:27:52.169002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.880 [2024-11-20 07:27:52.169030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.880 qpair failed and we were unable to recover it. 00:25:48.880 [2024-11-20 07:27:52.169117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.880 [2024-11-20 07:27:52.169145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.880 qpair failed and we were unable to recover it. 00:25:48.880 [2024-11-20 07:27:52.169233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.880 [2024-11-20 07:27:52.169261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.880 qpair failed and we were unable to recover it. 00:25:48.880 [2024-11-20 07:27:52.169409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.880 [2024-11-20 07:27:52.169437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.880 qpair failed and we were unable to recover it. 00:25:48.880 [2024-11-20 07:27:52.169551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.880 [2024-11-20 07:27:52.169578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.880 qpair failed and we were unable to recover it. 00:25:48.880 [2024-11-20 07:27:52.169663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.880 [2024-11-20 07:27:52.169691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.880 qpair failed and we were unable to recover it. 00:25:48.880 [2024-11-20 07:27:52.169841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.880 [2024-11-20 07:27:52.169888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.880 qpair failed and we were unable to recover it. 
00:25:48.880 [2024-11-20 07:27:52.170020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.880 [2024-11-20 07:27:52.170067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.880 qpair failed and we were unable to recover it. 00:25:48.880 [2024-11-20 07:27:52.170150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.880 [2024-11-20 07:27:52.170177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.880 qpair failed and we were unable to recover it. 00:25:48.880 [2024-11-20 07:27:52.170272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.880 [2024-11-20 07:27:52.170319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.880 qpair failed and we were unable to recover it. 00:25:48.880 [2024-11-20 07:27:52.170441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.880 [2024-11-20 07:27:52.170485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.880 qpair failed and we were unable to recover it. 00:25:48.880 [2024-11-20 07:27:52.170599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.880 [2024-11-20 07:27:52.170630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.880 qpair failed and we were unable to recover it. 00:25:48.880 [2024-11-20 07:27:52.170813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.880 [2024-11-20 07:27:52.170847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.880 qpair failed and we were unable to recover it. 00:25:48.880 [2024-11-20 07:27:52.171068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.880 [2024-11-20 07:27:52.171102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.880 qpair failed and we were unable to recover it. 00:25:48.880 [2024-11-20 07:27:52.171242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.880 [2024-11-20 07:27:52.171278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.880 qpair failed and we were unable to recover it. 00:25:48.880 [2024-11-20 07:27:52.171446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.880 [2024-11-20 07:27:52.171498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 00:25:48.881 [2024-11-20 07:27:52.171587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.881 [2024-11-20 07:27:52.171633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 
00:25:48.881 [2024-11-20 07:27:52.171781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.881 [2024-11-20 07:27:52.171827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 00:25:48.881 [2024-11-20 07:27:52.171958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.881 [2024-11-20 07:27:52.172007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 00:25:48.881 [2024-11-20 07:27:52.172131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.881 [2024-11-20 07:27:52.172158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 00:25:48.881 [2024-11-20 07:27:52.172237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.881 [2024-11-20 07:27:52.172263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 00:25:48.881 [2024-11-20 07:27:52.172425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.881 [2024-11-20 07:27:52.172471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 00:25:48.881 [2024-11-20 07:27:52.172553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.881 [2024-11-20 07:27:52.172579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 00:25:48.881 [2024-11-20 07:27:52.172692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.881 [2024-11-20 07:27:52.172737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 00:25:48.881 [2024-11-20 07:27:52.172832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.881 [2024-11-20 07:27:52.172860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 00:25:48.881 [2024-11-20 07:27:52.172961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.881 [2024-11-20 07:27:52.172987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 00:25:48.881 [2024-11-20 07:27:52.173074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.881 [2024-11-20 07:27:52.173101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 
00:25:48.881 [2024-11-20 07:27:52.173214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.881 [2024-11-20 07:27:52.173255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 00:25:48.881 [2024-11-20 07:27:52.173359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.881 [2024-11-20 07:27:52.173388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 00:25:48.881 [2024-11-20 07:27:52.173502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.881 [2024-11-20 07:27:52.173531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 00:25:48.881 [2024-11-20 07:27:52.173648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.881 [2024-11-20 07:27:52.173675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 00:25:48.881 [2024-11-20 07:27:52.173763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.881 [2024-11-20 07:27:52.173790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 00:25:48.881 [2024-11-20 07:27:52.173874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.881 [2024-11-20 07:27:52.173901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 00:25:48.881 [2024-11-20 07:27:52.174009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.881 [2024-11-20 07:27:52.174046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 00:25:48.881 [2024-11-20 07:27:52.174177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.881 [2024-11-20 07:27:52.174203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 00:25:48.881 [2024-11-20 07:27:52.174297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.881 [2024-11-20 07:27:52.174337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 00:25:48.881 [2024-11-20 07:27:52.174493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.881 [2024-11-20 07:27:52.174538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 
00:25:48.881 [2024-11-20 07:27:52.174641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.881 [2024-11-20 07:27:52.174671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 00:25:48.881 [2024-11-20 07:27:52.174783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.881 [2024-11-20 07:27:52.174812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 00:25:48.881 [2024-11-20 07:27:52.174924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.881 [2024-11-20 07:27:52.174950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 00:25:48.881 [2024-11-20 07:27:52.175036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.881 [2024-11-20 07:27:52.175063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 00:25:48.881 [2024-11-20 07:27:52.175164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.881 [2024-11-20 07:27:52.175203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 00:25:48.881 [2024-11-20 07:27:52.175324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.881 [2024-11-20 07:27:52.175353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 00:25:48.881 [2024-11-20 07:27:52.175483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.881 [2024-11-20 07:27:52.175521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 00:25:48.881 [2024-11-20 07:27:52.175632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.881 [2024-11-20 07:27:52.175659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 00:25:48.881 [2024-11-20 07:27:52.175753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.881 [2024-11-20 07:27:52.175780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 00:25:48.881 [2024-11-20 07:27:52.175886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.881 [2024-11-20 07:27:52.175912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 
00:25:48.881 [2024-11-20 07:27:52.176007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.881 [2024-11-20 07:27:52.176040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 00:25:48.881 [2024-11-20 07:27:52.176182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.881 [2024-11-20 07:27:52.176209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 00:25:48.881 [2024-11-20 07:27:52.176322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.881 [2024-11-20 07:27:52.176350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 00:25:48.881 [2024-11-20 07:27:52.176454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.881 [2024-11-20 07:27:52.176499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 00:25:48.881 [2024-11-20 07:27:52.176635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.881 [2024-11-20 07:27:52.176680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 00:25:48.881 [2024-11-20 07:27:52.176820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.881 [2024-11-20 07:27:52.176865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 00:25:48.881 [2024-11-20 07:27:52.176977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.881 [2024-11-20 07:27:52.177004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 00:25:48.881 [2024-11-20 07:27:52.177122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.881 [2024-11-20 07:27:52.177149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 00:25:48.881 [2024-11-20 07:27:52.177298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.881 [2024-11-20 07:27:52.177373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 00:25:48.881 [2024-11-20 07:27:52.177479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.881 [2024-11-20 07:27:52.177512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.881 qpair failed and we were unable to recover it. 
00:25:48.882 [2024-11-20 07:27:52.177626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.882 [2024-11-20 07:27:52.177658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.882 qpair failed and we were unable to recover it. 00:25:48.882 [2024-11-20 07:27:52.177818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.882 [2024-11-20 07:27:52.177851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.882 qpair failed and we were unable to recover it. 00:25:48.882 [2024-11-20 07:27:52.177952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.882 [2024-11-20 07:27:52.177985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.882 qpair failed and we were unable to recover it. 00:25:48.882 [2024-11-20 07:27:52.178134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.882 [2024-11-20 07:27:52.178183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.882 qpair failed and we were unable to recover it. 00:25:48.882 [2024-11-20 07:27:52.178331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.882 [2024-11-20 07:27:52.178358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.882 qpair failed and we were unable to recover it. 00:25:48.882 [2024-11-20 07:27:52.178452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.882 [2024-11-20 07:27:52.178480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.882 qpair failed and we were unable to recover it. 00:25:48.882 [2024-11-20 07:27:52.178612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.882 [2024-11-20 07:27:52.178659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.882 qpair failed and we were unable to recover it. 00:25:48.882 [2024-11-20 07:27:52.178752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.882 [2024-11-20 07:27:52.178785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.882 qpair failed and we were unable to recover it. 00:25:48.882 [2024-11-20 07:27:52.178931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.882 [2024-11-20 07:27:52.178978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.882 qpair failed and we were unable to recover it. 00:25:48.882 [2024-11-20 07:27:52.179077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.882 [2024-11-20 07:27:52.179105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.882 qpair failed and we were unable to recover it. 
00:25:48.882 [2024-11-20 07:27:52.179198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.882 [2024-11-20 07:27:52.179224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.882 qpair failed and we were unable to recover it. 00:25:48.882 [2024-11-20 07:27:52.179310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.882 [2024-11-20 07:27:52.179338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.882 qpair failed and we were unable to recover it. 00:25:48.882 [2024-11-20 07:27:52.179501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.882 [2024-11-20 07:27:52.179528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.882 qpair failed and we were unable to recover it. 00:25:48.882 [2024-11-20 07:27:52.179624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.882 [2024-11-20 07:27:52.179650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.882 qpair failed and we were unable to recover it. 00:25:48.882 [2024-11-20 07:27:52.179728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.882 [2024-11-20 07:27:52.179775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.882 qpair failed and we were unable to recover it. 00:25:48.882 [2024-11-20 07:27:52.179883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.882 [2024-11-20 07:27:52.179910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.882 qpair failed and we were unable to recover it. 00:25:48.882 [2024-11-20 07:27:52.180010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.882 [2024-11-20 07:27:52.180037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.882 qpair failed and we were unable to recover it. 00:25:48.882 [2024-11-20 07:27:52.180127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.882 [2024-11-20 07:27:52.180153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.882 qpair failed and we were unable to recover it. 00:25:48.882 [2024-11-20 07:27:52.180241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.882 [2024-11-20 07:27:52.180269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.882 qpair failed and we were unable to recover it. 00:25:48.882 [2024-11-20 07:27:52.180406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.882 [2024-11-20 07:27:52.180445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.882 qpair failed and we were unable to recover it. 
00:25:48.882 [2024-11-20 07:27:52.180549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.882 [2024-11-20 07:27:52.180608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.882 qpair failed and we were unable to recover it. 00:25:48.882 [2024-11-20 07:27:52.180751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.882 [2024-11-20 07:27:52.180800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.882 qpair failed and we were unable to recover it. 00:25:48.882 [2024-11-20 07:27:52.180932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.882 [2024-11-20 07:27:52.180979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.882 qpair failed and we were unable to recover it. 00:25:48.882 [2024-11-20 07:27:52.181110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.882 [2024-11-20 07:27:52.181157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.882 qpair failed and we were unable to recover it. 00:25:48.882 [2024-11-20 07:27:52.181272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.882 [2024-11-20 07:27:52.181315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.882 qpair failed and we were unable to recover it. 00:25:48.882 [2024-11-20 07:27:52.181417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.882 [2024-11-20 07:27:52.181450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.882 qpair failed and we were unable to recover it. 00:25:48.882 [2024-11-20 07:27:52.181533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.882 [2024-11-20 07:27:52.181560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.882 qpair failed and we were unable to recover it. 00:25:48.882 [2024-11-20 07:27:52.181673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.882 [2024-11-20 07:27:52.181718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.882 qpair failed and we were unable to recover it. 00:25:48.882 [2024-11-20 07:27:52.181845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.882 [2024-11-20 07:27:52.181874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.882 qpair failed and we were unable to recover it. 00:25:48.882 [2024-11-20 07:27:52.182001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.882 [2024-11-20 07:27:52.182028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.882 qpair failed and we were unable to recover it. 
00:25:48.882 [2024-11-20 07:27:52.182139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.882 [2024-11-20 07:27:52.182165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.882 qpair failed and we were unable to recover it. 00:25:48.882 [2024-11-20 07:27:52.182256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.882 [2024-11-20 07:27:52.182283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.882 qpair failed and we were unable to recover it. 00:25:48.882 [2024-11-20 07:27:52.182394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.882 [2024-11-20 07:27:52.182433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.882 qpair failed and we were unable to recover it. 00:25:48.882 [2024-11-20 07:27:52.182529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.882 [2024-11-20 07:27:52.182557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.882 qpair failed and we were unable to recover it. 00:25:48.882 [2024-11-20 07:27:52.182700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.882 [2024-11-20 07:27:52.182727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.882 qpair failed and we were unable to recover it. 00:25:48.882 [2024-11-20 07:27:52.182847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.882 [2024-11-20 07:27:52.182874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.882 qpair failed and we were unable to recover it. 00:25:48.882 [2024-11-20 07:27:52.182987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.882 [2024-11-20 07:27:52.183014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.882 qpair failed and we were unable to recover it. 00:25:48.882 [2024-11-20 07:27:52.183142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.882 [2024-11-20 07:27:52.183182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.882 qpair failed and we were unable to recover it. 00:25:48.882 [2024-11-20 07:27:52.183276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.882 [2024-11-20 07:27:52.183315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.882 qpair failed and we were unable to recover it. 00:25:48.882 [2024-11-20 07:27:52.183418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.882 [2024-11-20 07:27:52.183447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.882 qpair failed and we were unable to recover it. 
00:25:48.882 [2024-11-20 07:27:52.183532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.882 [2024-11-20 07:27:52.183559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.883 qpair failed and we were unable to recover it. 00:25:48.883 [2024-11-20 07:27:52.183701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.883 [2024-11-20 07:27:52.183727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.883 qpair failed and we were unable to recover it. 00:25:48.883 [2024-11-20 07:27:52.183851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.883 [2024-11-20 07:27:52.183883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.883 qpair failed and we were unable to recover it. 00:25:48.883 [2024-11-20 07:27:52.183984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.883 [2024-11-20 07:27:52.184016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.883 qpair failed and we were unable to recover it. 00:25:48.883 [2024-11-20 07:27:52.184169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.883 [2024-11-20 07:27:52.184231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.883 qpair failed and we were unable to recover it. 00:25:48.883 [2024-11-20 07:27:52.184383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.883 [2024-11-20 07:27:52.184411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.883 qpair failed and we were unable to recover it. 00:25:48.883 [2024-11-20 07:27:52.184515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.883 [2024-11-20 07:27:52.184544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.883 qpair failed and we were unable to recover it. 00:25:48.883 [2024-11-20 07:27:52.184688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.883 [2024-11-20 07:27:52.184733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.883 qpair failed and we were unable to recover it. 00:25:48.883 [2024-11-20 07:27:52.184843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.883 [2024-11-20 07:27:52.184888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.883 qpair failed and we were unable to recover it. 00:25:48.883 [2024-11-20 07:27:52.184986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.883 [2024-11-20 07:27:52.185032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.883 qpair failed and we were unable to recover it. 
00:25:48.883 [2024-11-20 07:27:52.185116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.883 [2024-11-20 07:27:52.185142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.883 qpair failed and we were unable to recover it. 00:25:48.883 [2024-11-20 07:27:52.185272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.883 [2024-11-20 07:27:52.185320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.883 qpair failed and we were unable to recover it. 00:25:48.883 [2024-11-20 07:27:52.185423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.883 [2024-11-20 07:27:52.185452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.883 qpair failed and we were unable to recover it. 00:25:48.883 [2024-11-20 07:27:52.185572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.883 [2024-11-20 07:27:52.185599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.883 qpair failed and we were unable to recover it. 00:25:48.883 [2024-11-20 07:27:52.185712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.883 [2024-11-20 07:27:52.185740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.883 qpair failed and we were unable to recover it. 00:25:48.883 [2024-11-20 07:27:52.185819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.883 [2024-11-20 07:27:52.185845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.883 qpair failed and we were unable to recover it. 00:25:48.883 [2024-11-20 07:27:52.185930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.883 [2024-11-20 07:27:52.185957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.883 qpair failed and we were unable to recover it. 00:25:48.883 [2024-11-20 07:27:52.186075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.883 [2024-11-20 07:27:52.186103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.883 qpair failed and we were unable to recover it. 00:25:48.883 [2024-11-20 07:27:52.186202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.883 [2024-11-20 07:27:52.186242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.883 qpair failed and we were unable to recover it. 00:25:48.883 [2024-11-20 07:27:52.186345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.883 [2024-11-20 07:27:52.186375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.883 qpair failed and we were unable to recover it. 
00:25:48.883 [2024-11-20 07:27:52.186490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.883 [2024-11-20 07:27:52.186518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.883 qpair failed and we were unable to recover it. 00:25:48.883 [2024-11-20 07:27:52.186634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.883 [2024-11-20 07:27:52.186667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.883 qpair failed and we were unable to recover it. 00:25:48.883 [2024-11-20 07:27:52.186811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.883 [2024-11-20 07:27:52.186843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.883 qpair failed and we were unable to recover it. 00:25:48.883 [2024-11-20 07:27:52.186935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.883 [2024-11-20 07:27:52.186968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.883 qpair failed and we were unable to recover it. 00:25:48.883 [2024-11-20 07:27:52.187084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.883 [2024-11-20 07:27:52.187127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.883 qpair failed and we were unable to recover it. 00:25:48.883 [2024-11-20 07:27:52.187286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.883 [2024-11-20 07:27:52.187339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.883 qpair failed and we were unable to recover it. 00:25:48.883 [2024-11-20 07:27:52.187458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.883 [2024-11-20 07:27:52.187485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.883 qpair failed and we were unable to recover it. 00:25:48.883 [2024-11-20 07:27:52.187575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.883 [2024-11-20 07:27:52.187603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.883 qpair failed and we were unable to recover it. 00:25:48.883 [2024-11-20 07:27:52.187760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.883 [2024-11-20 07:27:52.187806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.883 qpair failed and we were unable to recover it. 00:25:48.883 [2024-11-20 07:27:52.187905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.883 [2024-11-20 07:27:52.187937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.883 qpair failed and we were unable to recover it. 
00:25:48.883 [2024-11-20 07:27:52.188031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.883 [2024-11-20 07:27:52.188060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.883 qpair failed and we were unable to recover it. 00:25:48.883 [2024-11-20 07:27:52.188191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.883 [2024-11-20 07:27:52.188220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.883 qpair failed and we were unable to recover it. 00:25:48.883 [2024-11-20 07:27:52.188323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.883 [2024-11-20 07:27:52.188376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.883 qpair failed and we were unable to recover it. 00:25:48.883 [2024-11-20 07:27:52.188462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.883 [2024-11-20 07:27:52.188490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.883 qpair failed and we were unable to recover it. 00:25:48.883 [2024-11-20 07:27:52.188579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.883 [2024-11-20 07:27:52.188608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.883 qpair failed and we were unable to recover it. 00:25:48.883 [2024-11-20 07:27:52.188721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.883 [2024-11-20 07:27:52.188752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.883 qpair failed and we were unable to recover it. 00:25:48.883 [2024-11-20 07:27:52.188926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.883 [2024-11-20 07:27:52.188971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.883 qpair failed and we were unable to recover it. 00:25:48.884 [2024-11-20 07:27:52.189098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.884 [2024-11-20 07:27:52.189147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.884 qpair failed and we were unable to recover it. 00:25:48.884 [2024-11-20 07:27:52.189249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.884 [2024-11-20 07:27:52.189289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.884 qpair failed and we were unable to recover it. 00:25:48.884 [2024-11-20 07:27:52.189398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.884 [2024-11-20 07:27:52.189426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.884 qpair failed and we were unable to recover it. 
00:25:48.884 [2024-11-20 07:27:52.189507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.884 [2024-11-20 07:27:52.189551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.884 qpair failed and we were unable to recover it. 00:25:48.884 [2024-11-20 07:27:52.189672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.884 [2024-11-20 07:27:52.189719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.884 qpair failed and we were unable to recover it. 00:25:48.884 [2024-11-20 07:27:52.189829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.884 [2024-11-20 07:27:52.189875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.884 qpair failed and we were unable to recover it. 00:25:48.884 [2024-11-20 07:27:52.189996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.884 [2024-11-20 07:27:52.190028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.884 qpair failed and we were unable to recover it. 00:25:48.884 [2024-11-20 07:27:52.190166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.884 [2024-11-20 07:27:52.190194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.884 qpair failed and we were unable to recover it. 00:25:48.884 [2024-11-20 07:27:52.190280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.884 [2024-11-20 07:27:52.190314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.884 qpair failed and we were unable to recover it. 00:25:48.884 [2024-11-20 07:27:52.190413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.884 [2024-11-20 07:27:52.190440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.884 qpair failed and we were unable to recover it. 00:25:48.884 [2024-11-20 07:27:52.190521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.884 [2024-11-20 07:27:52.190547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.884 qpair failed and we were unable to recover it. 00:25:48.884 [2024-11-20 07:27:52.190639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.884 [2024-11-20 07:27:52.190668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.884 qpair failed and we were unable to recover it. 00:25:48.884 [2024-11-20 07:27:52.190790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.884 [2024-11-20 07:27:52.190820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.884 qpair failed and we were unable to recover it. 
00:25:48.884 [2024-11-20 07:27:52.190951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.884 [2024-11-20 07:27:52.190979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.884 qpair failed and we were unable to recover it. 00:25:48.884 [2024-11-20 07:27:52.191113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.884 [2024-11-20 07:27:52.191141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.884 qpair failed and we were unable to recover it. 00:25:48.884 [2024-11-20 07:27:52.191250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.884 [2024-11-20 07:27:52.191277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.884 qpair failed and we were unable to recover it. 00:25:48.884 [2024-11-20 07:27:52.191403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.884 [2024-11-20 07:27:52.191430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.884 qpair failed and we were unable to recover it. 00:25:48.884 [2024-11-20 07:27:52.191518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.884 [2024-11-20 07:27:52.191546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.884 qpair failed and we were unable to recover it. 00:25:48.884 [2024-11-20 07:27:52.191641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.884 [2024-11-20 07:27:52.191668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.884 qpair failed and we were unable to recover it. 00:25:48.884 [2024-11-20 07:27:52.191756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.884 [2024-11-20 07:27:52.191783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.884 qpair failed and we were unable to recover it. 00:25:48.884 [2024-11-20 07:27:52.191897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.884 [2024-11-20 07:27:52.191924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.884 qpair failed and we were unable to recover it. 00:25:48.884 [2024-11-20 07:27:52.191998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.884 [2024-11-20 07:27:52.192025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.884 qpair failed and we were unable to recover it. 00:25:48.884 [2024-11-20 07:27:52.192152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.884 [2024-11-20 07:27:52.192193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.884 qpair failed and we were unable to recover it. 
00:25:48.884 [2024-11-20 07:27:52.192295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.884 [2024-11-20 07:27:52.192343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.884 qpair failed and we were unable to recover it. 00:25:48.884 [2024-11-20 07:27:52.192433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.884 [2024-11-20 07:27:52.192462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.884 qpair failed and we were unable to recover it. 00:25:48.884 [2024-11-20 07:27:52.192543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.884 [2024-11-20 07:27:52.192570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.884 qpair failed and we were unable to recover it. 00:25:48.884 [2024-11-20 07:27:52.192661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.884 [2024-11-20 07:27:52.192689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.884 qpair failed and we were unable to recover it. 00:25:48.884 [2024-11-20 07:27:52.192829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.884 [2024-11-20 07:27:52.192856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.884 qpair failed and we were unable to recover it. 00:25:48.884 [2024-11-20 07:27:52.192960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.884 [2024-11-20 07:27:52.192991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.884 qpair failed and we were unable to recover it. 00:25:48.884 [2024-11-20 07:27:52.193135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.884 [2024-11-20 07:27:52.193166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.884 qpair failed and we were unable to recover it. 00:25:48.884 [2024-11-20 07:27:52.193283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.884 [2024-11-20 07:27:52.193327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.884 qpair failed and we were unable to recover it. 00:25:48.884 [2024-11-20 07:27:52.193424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.884 [2024-11-20 07:27:52.193453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.884 qpair failed and we were unable to recover it. 00:25:48.884 [2024-11-20 07:27:52.193554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.884 [2024-11-20 07:27:52.193584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.884 qpair failed and we were unable to recover it. 
00:25:48.884 [2024-11-20 07:27:52.193710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.884 [2024-11-20 07:27:52.193738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.884 qpair failed and we were unable to recover it. 00:25:48.884 [2024-11-20 07:27:52.193855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.884 [2024-11-20 07:27:52.193884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.884 qpair failed and we were unable to recover it. 00:25:48.884 [2024-11-20 07:27:52.193993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.884 [2024-11-20 07:27:52.194020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.884 qpair failed and we were unable to recover it. 00:25:48.884 [2024-11-20 07:27:52.194162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.884 [2024-11-20 07:27:52.194203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.884 qpair failed and we were unable to recover it. 00:25:48.884 [2024-11-20 07:27:52.194296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.884 [2024-11-20 07:27:52.194330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.884 qpair failed and we were unable to recover it. 00:25:48.884 [2024-11-20 07:27:52.194420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.884 [2024-11-20 07:27:52.194448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.884 qpair failed and we were unable to recover it. 00:25:48.885 [2024-11-20 07:27:52.194561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.885 [2024-11-20 07:27:52.194590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.885 qpair failed and we were unable to recover it. 00:25:48.885 [2024-11-20 07:27:52.194694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.885 [2024-11-20 07:27:52.194723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.885 qpair failed and we were unable to recover it. 00:25:48.885 [2024-11-20 07:27:52.194852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.885 [2024-11-20 07:27:52.194882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.885 qpair failed and we were unable to recover it. 00:25:48.885 [2024-11-20 07:27:52.195035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.885 [2024-11-20 07:27:52.195067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.885 qpair failed and we were unable to recover it. 
00:25:48.885 [2024-11-20 07:27:52.195175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.885 [2024-11-20 07:27:52.195203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.885 qpair failed and we were unable to recover it. 00:25:48.885 [2024-11-20 07:27:52.195292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.885 [2024-11-20 07:27:52.195334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.885 qpair failed and we were unable to recover it. 00:25:48.885 [2024-11-20 07:27:52.195444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.885 [2024-11-20 07:27:52.195473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.885 qpair failed and we were unable to recover it. 00:25:48.885 [2024-11-20 07:27:52.195619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.885 [2024-11-20 07:27:52.195666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.885 qpair failed and we were unable to recover it. 00:25:48.885 [2024-11-20 07:27:52.195756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.885 [2024-11-20 07:27:52.195783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.885 qpair failed and we were unable to recover it. 00:25:48.885 [2024-11-20 07:27:52.195875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.885 [2024-11-20 07:27:52.195902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.885 qpair failed and we were unable to recover it. 00:25:48.885 [2024-11-20 07:27:52.196013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.885 [2024-11-20 07:27:52.196040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.885 qpair failed and we were unable to recover it. 00:25:48.885 [2024-11-20 07:27:52.196118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.885 [2024-11-20 07:27:52.196144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.885 qpair failed and we were unable to recover it. 00:25:48.885 [2024-11-20 07:27:52.196223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.885 [2024-11-20 07:27:52.196249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.885 qpair failed and we were unable to recover it. 00:25:48.885 [2024-11-20 07:27:52.196357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.885 [2024-11-20 07:27:52.196397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.885 qpair failed and we were unable to recover it. 
00:25:48.885 [2024-11-20 07:27:52.196493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.885 [2024-11-20 07:27:52.196522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.885 qpair failed and we were unable to recover it. 00:25:48.885 [2024-11-20 07:27:52.196640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.885 [2024-11-20 07:27:52.196667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.885 qpair failed and we were unable to recover it. 00:25:48.885 [2024-11-20 07:27:52.196752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.885 [2024-11-20 07:27:52.196785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.885 qpair failed and we were unable to recover it. 00:25:48.885 [2024-11-20 07:27:52.196884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.885 [2024-11-20 07:27:52.196911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.885 qpair failed and we were unable to recover it. 00:25:48.885 [2024-11-20 07:27:52.196994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.885 [2024-11-20 07:27:52.197021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.885 qpair failed and we were unable to recover it. 00:25:48.885 [2024-11-20 07:27:52.197108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.885 [2024-11-20 07:27:52.197135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.885 qpair failed and we were unable to recover it. 00:25:48.885 [2024-11-20 07:27:52.197241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.885 [2024-11-20 07:27:52.197282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.885 qpair failed and we were unable to recover it. 00:25:48.885 [2024-11-20 07:27:52.197379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.885 [2024-11-20 07:27:52.197408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.885 qpair failed and we were unable to recover it. 00:25:48.885 [2024-11-20 07:27:52.197511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.885 [2024-11-20 07:27:52.197556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.885 qpair failed and we were unable to recover it. 00:25:48.885 [2024-11-20 07:27:52.197700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.885 [2024-11-20 07:27:52.197729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.885 qpair failed and we were unable to recover it. 
00:25:48.885 [2024-11-20 07:27:52.197814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.885 [2024-11-20 07:27:52.197843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.885 qpair failed and we were unable to recover it. 00:25:48.885 [2024-11-20 07:27:52.197980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.885 [2024-11-20 07:27:52.198009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.885 qpair failed and we were unable to recover it. 00:25:48.885 [2024-11-20 07:27:52.198108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.885 [2024-11-20 07:27:52.198137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.885 qpair failed and we were unable to recover it. 00:25:48.885 [2024-11-20 07:27:52.198230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.198260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.198406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.198435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.198562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.198592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.198703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.198730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.198823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.198850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.198985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.199013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.199101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.199127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 
00:25:48.886 [2024-11-20 07:27:52.199219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.199246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.199335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.199364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.199441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.199468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.199587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.199613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.199699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.199727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.199821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.199848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.199971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.199998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.200073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.200102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.200198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.200225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.200325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.200372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 
00:25:48.886 [2024-11-20 07:27:52.200492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.200522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.200639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.200669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.200790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.200836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.200961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.200992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.201129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.201179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.201321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.201348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.201443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.201472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.201621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.201665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.201855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.201903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.201992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.202021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 
00:25:48.886 [2024-11-20 07:27:52.202129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.202159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.202281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.202315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.202463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.202495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.202582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.202626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.202823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.202853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.203036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.203067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.203193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.203220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.203333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.203361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.203435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.203462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.203543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.203571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 
00:25:48.886 [2024-11-20 07:27:52.203724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.203782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.203917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.203965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.204092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.204138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.204230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.204258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.204354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.204382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.204474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.204501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.204600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.204627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.204740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.204767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.204880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.204906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.205046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.205072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 
00:25:48.886 [2024-11-20 07:27:52.205157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.205185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.205317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.205345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.205438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.205465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.205573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.205600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.205743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.205791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.205926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.886 [2024-11-20 07:27:52.205955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.886 qpair failed and we were unable to recover it. 00:25:48.886 [2024-11-20 07:27:52.206049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.206075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.206171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.206198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.206294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.206327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.206429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.206468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 
00:25:48.887 [2024-11-20 07:27:52.206555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.206582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.206697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.206723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.206811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.206838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.206924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.206950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.207050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.207090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.207207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.207234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.207349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.207377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.207468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.207495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.207640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.207669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.207765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.207795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 
00:25:48.887 [2024-11-20 07:27:52.207893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.207922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.208013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.208042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.208143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.208173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.208276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.208314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.208421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.208448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.208557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.208584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.208715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.208744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.208844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.208870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.209012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.209041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.209136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.209165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 
00:25:48.887 [2024-11-20 07:27:52.209264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.209290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.209378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.209405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.209514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.209541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.209677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.209706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.209818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.209847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.209933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.209962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.210075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.210114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.210220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.210258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.210362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.210392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.210482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.210510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 
00:25:48.887 [2024-11-20 07:27:52.210646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.210675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.210812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.210842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.210940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.210969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.211128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.211178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.211269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.211298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.211400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.211427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.211553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.211581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.211717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.211743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.211851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.211878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.211970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.212002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 
00:25:48.887 [2024-11-20 07:27:52.212120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.212149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.212282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.212322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.212431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.212458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.212547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.212574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.212710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.212755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.212881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.212926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.213015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.213041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.213129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.213156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.887 qpair failed and we were unable to recover it. 00:25:48.887 [2024-11-20 07:27:52.213274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.887 [2024-11-20 07:27:52.213307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.213423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.213450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 
00:25:48.888 [2024-11-20 07:27:52.213541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.213568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.213653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.213679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.213776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.213818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.213930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.213960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.214089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.214115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.214221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.214247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.214360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.214403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.214524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.214571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.214690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.214721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.214860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.214890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 
00:25:48.888 [2024-11-20 07:27:52.214998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.215024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.215142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.215168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.215259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.215286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.215383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.215409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.215491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.215518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.215631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.215658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.215773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.215804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.215894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.215920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.216058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.216086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.216209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.216238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 
00:25:48.888 [2024-11-20 07:27:52.216369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.216425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.216544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.216575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.216732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.216761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.216902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.216945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.217068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.217112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.217200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.217226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.217340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.217367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.217452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.217479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.217561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.217590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.217677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.217704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 
00:25:48.888 [2024-11-20 07:27:52.217826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.217852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.217982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.218009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.218126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.218152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.218265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.218291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.218395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.218434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.218551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.218581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.218757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.218801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.218907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.218937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.219060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.219088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.219172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.219200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 
00:25:48.888 [2024-11-20 07:27:52.219282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.219314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.219419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.219447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.219563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.219590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.219707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.219756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.219868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.219895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.220006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.888 [2024-11-20 07:27:52.220031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.888 qpair failed and we were unable to recover it. 00:25:48.888 [2024-11-20 07:27:52.220159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.220185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.220317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.220344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.220424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.220451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.220583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.220612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 
00:25:48.889 [2024-11-20 07:27:52.220725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.220755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.220874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.220903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.221048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.221081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.221175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.221202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.221284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.221321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.221403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.221431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.221530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.221559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.221715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.221759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.221868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.221897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.222018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.222046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 
00:25:48.889 [2024-11-20 07:27:52.222161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.222189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.222317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.222360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.222451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.222479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.222565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.222594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.222725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.222753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.222882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.222911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.223027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.223072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.223232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.223260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.223356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.223385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.223466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.223493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 
00:25:48.889 [2024-11-20 07:27:52.223626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.223660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.223802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.223847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.223962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.224016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.224169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.224197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.224314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.224344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.224467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.224495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.224582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.224610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.224692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.224736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.224891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.224939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.225028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.225056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 
00:25:48.889 [2024-11-20 07:27:52.225144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.225172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.225283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.225317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.225459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.225486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.225578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.225605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.225761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.225788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.225873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.225899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.225989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.226015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.226091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.226117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.226199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.226226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.226340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.226367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 
00:25:48.889 [2024-11-20 07:27:52.226471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.226512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.226654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.226683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.226837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.226866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.226967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.226998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.227118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.227147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.227272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.227307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.227444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.227470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.227583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.227613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.227766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.227796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.227944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.227973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 
00:25:48.889 [2024-11-20 07:27:52.228092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.228122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.228255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.228299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.228434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.228462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.889 [2024-11-20 07:27:52.228577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.889 [2024-11-20 07:27:52.228604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.889 qpair failed and we were unable to recover it. 00:25:48.890 [2024-11-20 07:27:52.228733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.890 [2024-11-20 07:27:52.228762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.890 qpair failed and we were unable to recover it. 00:25:48.890 [2024-11-20 07:27:52.228860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.890 [2024-11-20 07:27:52.228889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.890 qpair failed and we were unable to recover it. 00:25:48.890 [2024-11-20 07:27:52.228982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.890 [2024-11-20 07:27:52.229012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.890 qpair failed and we were unable to recover it. 00:25:48.890 [2024-11-20 07:27:52.229113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.890 [2024-11-20 07:27:52.229143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.890 qpair failed and we were unable to recover it. 00:25:48.890 [2024-11-20 07:27:52.229253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.890 [2024-11-20 07:27:52.229279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.890 qpair failed and we were unable to recover it. 00:25:48.890 [2024-11-20 07:27:52.229407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.890 [2024-11-20 07:27:52.229447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.890 qpair failed and we were unable to recover it. 
00:25:48.890 [2024-11-20 07:27:52.229547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.890 [2024-11-20 07:27:52.229575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.890 qpair failed and we were unable to recover it. 00:25:48.890 [2024-11-20 07:27:52.229685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.890 [2024-11-20 07:27:52.229718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.890 qpair failed and we were unable to recover it. 00:25:48.890 [2024-11-20 07:27:52.229828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.890 [2024-11-20 07:27:52.229861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.890 qpair failed and we were unable to recover it. 00:25:48.890 [2024-11-20 07:27:52.229985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.890 [2024-11-20 07:27:52.230016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:48.890 qpair failed and we were unable to recover it. 00:25:48.890 [2024-11-20 07:27:52.230137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.890 [2024-11-20 07:27:52.230176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.890 qpair failed and we were unable to recover it. 00:25:48.890 [2024-11-20 07:27:52.230268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.890 [2024-11-20 07:27:52.230297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.890 qpair failed and we were unable to recover it. 00:25:48.890 [2024-11-20 07:27:52.230392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.890 [2024-11-20 07:27:52.230437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.890 qpair failed and we were unable to recover it. 00:25:48.890 [2024-11-20 07:27:52.230574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.890 [2024-11-20 07:27:52.230600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.890 qpair failed and we were unable to recover it. 00:25:48.890 [2024-11-20 07:27:52.230710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.890 [2024-11-20 07:27:52.230757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.890 qpair failed and we were unable to recover it. 00:25:48.890 [2024-11-20 07:27:52.230896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.890 [2024-11-20 07:27:52.230926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.890 qpair failed and we were unable to recover it. 
00:25:48.890 [2024-11-20 07:27:52.231050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.890 [2024-11-20 07:27:52.231079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.890 qpair failed and we were unable to recover it. 00:25:48.890 [2024-11-20 07:27:52.231196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.890 [2024-11-20 07:27:52.231223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.890 qpair failed and we were unable to recover it. 00:25:48.890 [2024-11-20 07:27:52.231338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.890 [2024-11-20 07:27:52.231365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.890 qpair failed and we were unable to recover it. 00:25:48.890 [2024-11-20 07:27:52.231478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.890 [2024-11-20 07:27:52.231504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.890 qpair failed and we were unable to recover it. 00:25:48.890 [2024-11-20 07:27:52.231595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.890 [2024-11-20 07:27:52.231624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.890 qpair failed and we were unable to recover it. 00:25:48.890 [2024-11-20 07:27:52.231741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.890 [2024-11-20 07:27:52.231786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.890 qpair failed and we were unable to recover it. 00:25:48.890 [2024-11-20 07:27:52.231921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.890 [2024-11-20 07:27:52.231966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.890 qpair failed and we were unable to recover it. 00:25:48.890 [2024-11-20 07:27:52.232071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.890 [2024-11-20 07:27:52.232102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.890 qpair failed and we were unable to recover it. 00:25:48.890 [2024-11-20 07:27:52.232223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.890 [2024-11-20 07:27:52.232263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.890 qpair failed and we were unable to recover it. 00:25:48.890 [2024-11-20 07:27:52.232362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.890 [2024-11-20 07:27:52.232409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.890 qpair failed and we were unable to recover it. 
00:25:48.890 [2024-11-20 07:27:52.232515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.890 [2024-11-20 07:27:52.232559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.890 qpair failed and we were unable to recover it. 00:25:48.890 [2024-11-20 07:27:52.232643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.890 [2024-11-20 07:27:52.232669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.890 qpair failed and we were unable to recover it. 00:25:48.890 [2024-11-20 07:27:52.232758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.890 [2024-11-20 07:27:52.232784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.890 qpair failed and we were unable to recover it. 00:25:48.890 [2024-11-20 07:27:52.232910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.890 [2024-11-20 07:27:52.232937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.890 qpair failed and we were unable to recover it. 00:25:48.890 [2024-11-20 07:27:52.233092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.890 [2024-11-20 07:27:52.233118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.890 qpair failed and we were unable to recover it. 00:25:48.890 [2024-11-20 07:27:52.233224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.890 [2024-11-20 07:27:52.233250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:48.890 qpair failed and we were unable to recover it. 00:25:48.890 [2024-11-20 07:27:52.233345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.890 [2024-11-20 07:27:52.233374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:48.890 qpair failed and we were unable to recover it. 00:25:48.890 [2024-11-20 07:27:52.233486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.890 [2024-11-20 07:27:52.233520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.890 qpair failed and we were unable to recover it. 00:25:48.890 [2024-11-20 07:27:52.233653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.890 [2024-11-20 07:27:52.233698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.890 qpair failed and we were unable to recover it. 00:25:48.890 [2024-11-20 07:27:52.233791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.890 [2024-11-20 07:27:52.233820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.890 qpair failed and we were unable to recover it. 
00:25:48.890 [2024-11-20 07:27:52.233978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.890 [2024-11-20 07:27:52.234024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.890 qpair failed and we were unable to recover it. 00:25:48.890 [2024-11-20 07:27:52.234146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.890 [2024-11-20 07:27:52.234175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:48.890 qpair failed and we were unable to recover it. 00:25:49.184 [2024-11-20 07:27:52.234294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.184 [2024-11-20 07:27:52.234345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.184 qpair failed and we were unable to recover it. 00:25:49.184 [2024-11-20 07:27:52.234450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.184 [2024-11-20 07:27:52.234479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.184 qpair failed and we were unable to recover it. 00:25:49.184 [2024-11-20 07:27:52.234570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.184 [2024-11-20 07:27:52.234599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.184 qpair failed and we were unable to recover it. 00:25:49.184 [2024-11-20 07:27:52.234698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.184 [2024-11-20 07:27:52.234727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.184 qpair failed and we were unable to recover it. 00:25:49.184 [2024-11-20 07:27:52.234829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.184 [2024-11-20 07:27:52.234859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.184 qpair failed and we were unable to recover it. 00:25:49.184 [2024-11-20 07:27:52.234945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.184 [2024-11-20 07:27:52.234974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.184 qpair failed and we were unable to recover it. 00:25:49.184 [2024-11-20 07:27:52.235061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.184 [2024-11-20 07:27:52.235090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.184 qpair failed and we were unable to recover it. 00:25:49.184 [2024-11-20 07:27:52.235214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.184 [2024-11-20 07:27:52.235242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.184 qpair failed and we were unable to recover it. 
00:25:49.184 [2024-11-20 07:27:52.235329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.184 [2024-11-20 07:27:52.235357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.184 qpair failed and we were unable to recover it. 00:25:49.184 [2024-11-20 07:27:52.235478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.184 [2024-11-20 07:27:52.235523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.184 qpair failed and we were unable to recover it. 00:25:49.184 [2024-11-20 07:27:52.235636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.184 [2024-11-20 07:27:52.235665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.184 qpair failed and we were unable to recover it. 00:25:49.184 [2024-11-20 07:27:52.235795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.184 [2024-11-20 07:27:52.235825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.184 qpair failed and we were unable to recover it. 00:25:49.184 [2024-11-20 07:27:52.235922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.184 [2024-11-20 07:27:52.235966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.184 qpair failed and we were unable to recover it. 00:25:49.184 [2024-11-20 07:27:52.236080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.184 [2024-11-20 07:27:52.236106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.184 qpair failed and we were unable to recover it. 00:25:49.184 [2024-11-20 07:27:52.236191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.184 [2024-11-20 07:27:52.236218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.184 qpair failed and we were unable to recover it. 00:25:49.184 [2024-11-20 07:27:52.236353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.184 [2024-11-20 07:27:52.236379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.184 qpair failed and we were unable to recover it. 00:25:49.184 [2024-11-20 07:27:52.236504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.184 [2024-11-20 07:27:52.236535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.184 qpair failed and we were unable to recover it. 00:25:49.184 [2024-11-20 07:27:52.236648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.184 [2024-11-20 07:27:52.236679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.185 qpair failed and we were unable to recover it. 
00:25:49.185 [2024-11-20 07:27:52.236786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.185 [2024-11-20 07:27:52.236817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.185 qpair failed and we were unable to recover it. 00:25:49.185 [2024-11-20 07:27:52.236944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.185 [2024-11-20 07:27:52.236974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.185 qpair failed and we were unable to recover it. 00:25:49.185 [2024-11-20 07:27:52.237083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.185 [2024-11-20 07:27:52.237109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.185 qpair failed and we were unable to recover it. 00:25:49.185 [2024-11-20 07:27:52.237185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.185 [2024-11-20 07:27:52.237211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.185 qpair failed and we were unable to recover it. 00:25:49.185 [2024-11-20 07:27:52.237329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.185 [2024-11-20 07:27:52.237361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.185 qpair failed and we were unable to recover it. 00:25:49.185 [2024-11-20 07:27:52.237464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.185 [2024-11-20 07:27:52.237494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.185 qpair failed and we were unable to recover it. 00:25:49.185 [2024-11-20 07:27:52.237639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.185 [2024-11-20 07:27:52.237669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.185 qpair failed and we were unable to recover it. 00:25:49.185 [2024-11-20 07:27:52.237774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.185 [2024-11-20 07:27:52.237804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.185 qpair failed and we were unable to recover it. 00:25:49.185 [2024-11-20 07:27:52.237911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.185 [2024-11-20 07:27:52.237942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.185 qpair failed and we were unable to recover it. 00:25:49.185 [2024-11-20 07:27:52.238093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.185 [2024-11-20 07:27:52.238123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.185 qpair failed and we were unable to recover it. 
00:25:49.185 [2024-11-20 07:27:52.238251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.185 [2024-11-20 07:27:52.238277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.185 qpair failed and we were unable to recover it. 00:25:49.185 [2024-11-20 07:27:52.238397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.185 [2024-11-20 07:27:52.238424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.185 qpair failed and we were unable to recover it. 00:25:49.185 [2024-11-20 07:27:52.238540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.185 [2024-11-20 07:27:52.238586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.185 qpair failed and we were unable to recover it. 00:25:49.185 [2024-11-20 07:27:52.238678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.185 [2024-11-20 07:27:52.238707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.185 qpair failed and we were unable to recover it. 00:25:49.185 [2024-11-20 07:27:52.238832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.185 [2024-11-20 07:27:52.238862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.185 qpair failed and we were unable to recover it. 00:25:49.185 [2024-11-20 07:27:52.238979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.185 [2024-11-20 07:27:52.239023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.185 qpair failed and we were unable to recover it. 00:25:49.185 [2024-11-20 07:27:52.239162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.185 [2024-11-20 07:27:52.239187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.185 qpair failed and we were unable to recover it. 00:25:49.185 [2024-11-20 07:27:52.239272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.185 [2024-11-20 07:27:52.239298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.185 qpair failed and we were unable to recover it. 00:25:49.185 [2024-11-20 07:27:52.239425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.185 [2024-11-20 07:27:52.239455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.185 qpair failed and we were unable to recover it. 00:25:49.185 [2024-11-20 07:27:52.239551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.185 [2024-11-20 07:27:52.239581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.185 qpair failed and we were unable to recover it. 
00:25:49.185 [2024-11-20 07:27:52.239706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.185 [2024-11-20 07:27:52.239735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.185 qpair failed and we were unable to recover it. 00:25:49.185 [2024-11-20 07:27:52.239826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.185 [2024-11-20 07:27:52.239857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.185 qpair failed and we were unable to recover it. 00:25:49.185 [2024-11-20 07:27:52.239984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.185 [2024-11-20 07:27:52.240019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.185 qpair failed and we were unable to recover it. 00:25:49.185 [2024-11-20 07:27:52.240199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.185 [2024-11-20 07:27:52.240229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.185 qpair failed and we were unable to recover it. 00:25:49.185 [2024-11-20 07:27:52.240327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.185 [2024-11-20 07:27:52.240355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.185 qpair failed and we were unable to recover it. 00:25:49.185 [2024-11-20 07:27:52.240466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.185 [2024-11-20 07:27:52.240496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.185 qpair failed and we were unable to recover it. 00:25:49.185 [2024-11-20 07:27:52.240644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.185 [2024-11-20 07:27:52.240688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.185 qpair failed and we were unable to recover it. 00:25:49.185 [2024-11-20 07:27:52.240811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.185 [2024-11-20 07:27:52.240837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.185 qpair failed and we were unable to recover it. 00:25:49.185 [2024-11-20 07:27:52.240929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.185 [2024-11-20 07:27:52.240955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.185 qpair failed and we were unable to recover it. 00:25:49.185 [2024-11-20 07:27:52.241096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.185 [2024-11-20 07:27:52.241123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.185 qpair failed and we were unable to recover it. 
00:25:49.185 [2024-11-20 07:27:52.241200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.185 [2024-11-20 07:27:52.241226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.185 qpair failed and we were unable to recover it. 00:25:49.185 [2024-11-20 07:27:52.241321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.185 [2024-11-20 07:27:52.241349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.185 qpair failed and we were unable to recover it. 00:25:49.185 [2024-11-20 07:27:52.241462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.185 [2024-11-20 07:27:52.241489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.185 qpair failed and we were unable to recover it. 00:25:49.185 [2024-11-20 07:27:52.241572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.185 [2024-11-20 07:27:52.241598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.185 qpair failed and we were unable to recover it. 00:25:49.185 [2024-11-20 07:27:52.241733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.185 [2024-11-20 07:27:52.241759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.185 qpair failed and we were unable to recover it. 00:25:49.185 [2024-11-20 07:27:52.241883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.185 [2024-11-20 07:27:52.241909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.185 qpair failed and we were unable to recover it. 00:25:49.185 [2024-11-20 07:27:52.242011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.185 [2024-11-20 07:27:52.242050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.185 qpair failed and we were unable to recover it. 00:25:49.186 [2024-11-20 07:27:52.242150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.186 [2024-11-20 07:27:52.242178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.186 qpair failed and we were unable to recover it. 00:25:49.186 [2024-11-20 07:27:52.242297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.186 [2024-11-20 07:27:52.242352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.186 qpair failed and we were unable to recover it. 00:25:49.186 [2024-11-20 07:27:52.242477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.186 [2024-11-20 07:27:52.242508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.186 qpair failed and we were unable to recover it. 
00:25:49.186 [2024-11-20 07:27:52.242637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.186 [2024-11-20 07:27:52.242667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.186 qpair failed and we were unable to recover it. 00:25:49.186 [2024-11-20 07:27:52.242797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.186 [2024-11-20 07:27:52.242827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.186 qpair failed and we were unable to recover it. 00:25:49.186 [2024-11-20 07:27:52.242950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.186 [2024-11-20 07:27:52.242980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.186 qpair failed and we were unable to recover it. 00:25:49.186 [2024-11-20 07:27:52.243100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.186 [2024-11-20 07:27:52.243129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.186 qpair failed and we were unable to recover it. 00:25:49.186 [2024-11-20 07:27:52.243240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.186 [2024-11-20 07:27:52.243273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.186 qpair failed and we were unable to recover it. 00:25:49.186 [2024-11-20 07:27:52.243393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.186 [2024-11-20 07:27:52.243438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.186 qpair failed and we were unable to recover it. 00:25:49.186 [2024-11-20 07:27:52.243547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.186 [2024-11-20 07:27:52.243579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.186 qpair failed and we were unable to recover it. 00:25:49.186 [2024-11-20 07:27:52.243683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.186 [2024-11-20 07:27:52.243711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.186 qpair failed and we were unable to recover it. 00:25:49.186 [2024-11-20 07:27:52.243828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.186 [2024-11-20 07:27:52.243856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.186 qpair failed and we were unable to recover it. 00:25:49.186 [2024-11-20 07:27:52.243943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.186 [2024-11-20 07:27:52.243972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.186 qpair failed and we were unable to recover it. 
00:25:49.186 [2024-11-20 07:27:52.244091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.186 [2024-11-20 07:27:52.244122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.186 qpair failed and we were unable to recover it. 00:25:49.186 [2024-11-20 07:27:52.244237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.186 [2024-11-20 07:27:52.244280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.186 qpair failed and we were unable to recover it. 00:25:49.186 [2024-11-20 07:27:52.244437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.186 [2024-11-20 07:27:52.244466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.186 qpair failed and we were unable to recover it. 00:25:49.186 [2024-11-20 07:27:52.244554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.186 [2024-11-20 07:27:52.244581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.186 qpair failed and we were unable to recover it. 00:25:49.186 [2024-11-20 07:27:52.244704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.186 [2024-11-20 07:27:52.244733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.186 qpair failed and we were unable to recover it. 00:25:49.186 [2024-11-20 07:27:52.244888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.186 [2024-11-20 07:27:52.244915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.186 qpair failed and we were unable to recover it. 00:25:49.186 [2024-11-20 07:27:52.245029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.186 [2024-11-20 07:27:52.245058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.186 qpair failed and we were unable to recover it. 00:25:49.186 [2024-11-20 07:27:52.245148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.186 [2024-11-20 07:27:52.245178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.186 qpair failed and we were unable to recover it. 00:25:49.186 [2024-11-20 07:27:52.245286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.186 [2024-11-20 07:27:52.245325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.186 qpair failed and we were unable to recover it. 00:25:49.186 [2024-11-20 07:27:52.245454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.186 [2024-11-20 07:27:52.245483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.186 qpair failed and we were unable to recover it. 
00:25:49.186 [2024-11-20 07:27:52.245571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.186 [2024-11-20 07:27:52.245599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.186 qpair failed and we were unable to recover it. 00:25:49.186 [2024-11-20 07:27:52.245695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.186 [2024-11-20 07:27:52.245724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.186 qpair failed and we were unable to recover it. 00:25:49.186 [2024-11-20 07:27:52.245838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.186 [2024-11-20 07:27:52.245881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.186 qpair failed and we were unable to recover it. 00:25:49.186 [2024-11-20 07:27:52.245986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.186 [2024-11-20 07:27:52.246031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.186 qpair failed and we were unable to recover it. 00:25:49.186 [2024-11-20 07:27:52.246164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.186 [2024-11-20 07:27:52.246193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.186 qpair failed and we were unable to recover it. 00:25:49.186 [2024-11-20 07:27:52.246315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.186 [2024-11-20 07:27:52.246360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.186 qpair failed and we were unable to recover it. 00:25:49.186 [2024-11-20 07:27:52.246463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.186 [2024-11-20 07:27:52.246493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.186 qpair failed and we were unable to recover it. 00:25:49.186 [2024-11-20 07:27:52.246587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.186 [2024-11-20 07:27:52.246616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.186 qpair failed and we were unable to recover it. 00:25:49.186 [2024-11-20 07:27:52.246708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.186 [2024-11-20 07:27:52.246737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.186 qpair failed and we were unable to recover it. 00:25:49.186 [2024-11-20 07:27:52.246857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.186 [2024-11-20 07:27:52.246899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.186 qpair failed and we were unable to recover it. 
00:25:49.186 [2024-11-20 07:27:52.247004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.186 [2024-11-20 07:27:52.247032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.186 qpair failed and we were unable to recover it. 00:25:49.186 [2024-11-20 07:27:52.247148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.186 [2024-11-20 07:27:52.247182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.186 qpair failed and we were unable to recover it. 00:25:49.186 [2024-11-20 07:27:52.247278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.186 [2024-11-20 07:27:52.247311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.186 qpair failed and we were unable to recover it. 00:25:49.186 [2024-11-20 07:27:52.247400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.186 [2024-11-20 07:27:52.247426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.186 qpair failed and we were unable to recover it. 00:25:49.186 [2024-11-20 07:27:52.247511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.187 [2024-11-20 07:27:52.247537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.187 qpair failed and we were unable to recover it. 00:25:49.187 [2024-11-20 07:27:52.247681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.187 [2024-11-20 07:27:52.247709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.187 qpair failed and we were unable to recover it. 00:25:49.187 [2024-11-20 07:27:52.247826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.187 [2024-11-20 07:27:52.247854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.187 qpair failed and we were unable to recover it. 00:25:49.187 [2024-11-20 07:27:52.247952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.187 [2024-11-20 07:27:52.247980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.187 qpair failed and we were unable to recover it. 00:25:49.187 [2024-11-20 07:27:52.248128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.187 [2024-11-20 07:27:52.248159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.187 qpair failed and we were unable to recover it. 00:25:49.187 [2024-11-20 07:27:52.248282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.187 [2024-11-20 07:27:52.248335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.187 qpair failed and we were unable to recover it. 
00:25:49.187 [2024-11-20 07:27:52.248433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.187 [2024-11-20 07:27:52.248461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.187 qpair failed and we were unable to recover it. 00:25:49.187 [2024-11-20 07:27:52.248550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.187 [2024-11-20 07:27:52.248592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.187 qpair failed and we were unable to recover it. 00:25:49.187 [2024-11-20 07:27:52.248741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.187 [2024-11-20 07:27:52.248769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.187 qpair failed and we were unable to recover it. 00:25:49.187 [2024-11-20 07:27:52.248902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.187 [2024-11-20 07:27:52.248932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.187 qpair failed and we were unable to recover it. 00:25:49.187 [2024-11-20 07:27:52.249030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.187 [2024-11-20 07:27:52.249059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.187 qpair failed and we were unable to recover it. 00:25:49.187 [2024-11-20 07:27:52.249188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.187 [2024-11-20 07:27:52.249228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.187 qpair failed and we were unable to recover it. 00:25:49.187 [2024-11-20 07:27:52.249352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.187 [2024-11-20 07:27:52.249381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.187 qpair failed and we were unable to recover it. 00:25:49.187 [2024-11-20 07:27:52.249485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.187 [2024-11-20 07:27:52.249514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.187 qpair failed and we were unable to recover it. 00:25:49.187 [2024-11-20 07:27:52.249661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.187 [2024-11-20 07:27:52.249705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.187 qpair failed and we were unable to recover it. 00:25:49.187 [2024-11-20 07:27:52.249821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.187 [2024-11-20 07:27:52.249866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.187 qpair failed and we were unable to recover it. 
00:25:49.187 [2024-11-20 07:27:52.249969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.187 [2024-11-20 07:27:52.249995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.187 qpair failed and we were unable to recover it. 00:25:49.187 [2024-11-20 07:27:52.250108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.187 [2024-11-20 07:27:52.250135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.187 qpair failed and we were unable to recover it. 00:25:49.187 [2024-11-20 07:27:52.250228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.187 [2024-11-20 07:27:52.250255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.187 qpair failed and we were unable to recover it. 00:25:49.187 [2024-11-20 07:27:52.250368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.187 [2024-11-20 07:27:52.250395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.187 qpair failed and we were unable to recover it. 00:25:49.187 [2024-11-20 07:27:52.250484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.187 [2024-11-20 07:27:52.250510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.187 qpair failed and we were unable to recover it. 00:25:49.187 [2024-11-20 07:27:52.250594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.187 [2024-11-20 07:27:52.250640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.187 qpair failed and we were unable to recover it. 00:25:49.187 [2024-11-20 07:27:52.250748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.187 [2024-11-20 07:27:52.250775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.187 qpair failed and we were unable to recover it. 00:25:49.187 [2024-11-20 07:27:52.250856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.187 [2024-11-20 07:27:52.250882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.187 qpair failed and we were unable to recover it. 00:25:49.187 [2024-11-20 07:27:52.250970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.187 [2024-11-20 07:27:52.251001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.187 qpair failed and we were unable to recover it. 00:25:49.187 [2024-11-20 07:27:52.251091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.187 [2024-11-20 07:27:52.251119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.187 qpair failed and we were unable to recover it. 
00:25:49.187 [2024-11-20 07:27:52.251261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.187 [2024-11-20 07:27:52.251290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.187 qpair failed and we were unable to recover it. 00:25:49.187 [2024-11-20 07:27:52.251417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.187 [2024-11-20 07:27:52.251459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.187 qpair failed and we were unable to recover it. 00:25:49.187 [2024-11-20 07:27:52.251603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.187 [2024-11-20 07:27:52.251643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.187 qpair failed and we were unable to recover it. 00:25:49.187 [2024-11-20 07:27:52.251740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.187 [2024-11-20 07:27:52.251769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.187 qpair failed and we were unable to recover it. 00:25:49.187 [2024-11-20 07:27:52.251883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.187 [2024-11-20 07:27:52.251910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.187 qpair failed and we were unable to recover it. 00:25:49.187 [2024-11-20 07:27:52.251994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.187 [2024-11-20 07:27:52.252021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.187 qpair failed and we were unable to recover it. 00:25:49.187 [2024-11-20 07:27:52.252110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.187 [2024-11-20 07:27:52.252137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.187 qpair failed and we were unable to recover it. 00:25:49.187 [2024-11-20 07:27:52.252273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.187 [2024-11-20 07:27:52.252301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.187 qpair failed and we were unable to recover it. 00:25:49.187 [2024-11-20 07:27:52.252397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.187 [2024-11-20 07:27:52.252424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.187 qpair failed and we were unable to recover it. 00:25:49.187 [2024-11-20 07:27:52.252557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.187 [2024-11-20 07:27:52.252584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.187 qpair failed and we were unable to recover it. 
00:25:49.187 [2024-11-20 07:27:52.252709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.188 [2024-11-20 07:27:52.252740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.188 qpair failed and we were unable to recover it. 00:25:49.188 [2024-11-20 07:27:52.252914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.188 [2024-11-20 07:27:52.252960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.188 qpair failed and we were unable to recover it. 00:25:49.188 [2024-11-20 07:27:52.253079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.188 [2024-11-20 07:27:52.253106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.188 qpair failed and we were unable to recover it. 00:25:49.188 [2024-11-20 07:27:52.253194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.188 [2024-11-20 07:27:52.253222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.188 qpair failed and we were unable to recover it. 00:25:49.188 [2024-11-20 07:27:52.253307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.188 [2024-11-20 07:27:52.253335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.188 qpair failed and we were unable to recover it. 00:25:49.188 [2024-11-20 07:27:52.253424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.188 [2024-11-20 07:27:52.253451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.188 qpair failed and we were unable to recover it. 00:25:49.188 [2024-11-20 07:27:52.253538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.188 [2024-11-20 07:27:52.253566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.188 qpair failed and we were unable to recover it. 00:25:49.188 [2024-11-20 07:27:52.253651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.188 [2024-11-20 07:27:52.253678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.188 qpair failed and we were unable to recover it. 00:25:49.188 [2024-11-20 07:27:52.253768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.188 [2024-11-20 07:27:52.253795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.188 qpair failed and we were unable to recover it. 00:25:49.188 [2024-11-20 07:27:52.253907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.188 [2024-11-20 07:27:52.253934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.188 qpair failed and we were unable to recover it. 
00:25:49.188 [2024-11-20 07:27:52.254036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.188 [2024-11-20 07:27:52.254077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.188 qpair failed and we were unable to recover it. 00:25:49.188 [2024-11-20 07:27:52.254166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.188 [2024-11-20 07:27:52.254195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.188 qpair failed and we were unable to recover it. 00:25:49.188 [2024-11-20 07:27:52.254289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.188 [2024-11-20 07:27:52.254340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.188 qpair failed and we were unable to recover it. 00:25:49.188 [2024-11-20 07:27:52.254430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.188 [2024-11-20 07:27:52.254458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.188 qpair failed and we were unable to recover it. 00:25:49.188 [2024-11-20 07:27:52.254586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.188 [2024-11-20 07:27:52.254614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.188 qpair failed and we were unable to recover it. 00:25:49.188 [2024-11-20 07:27:52.254732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.188 [2024-11-20 07:27:52.254780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.188 qpair failed and we were unable to recover it. 00:25:49.188 [2024-11-20 07:27:52.254931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.188 [2024-11-20 07:27:52.254959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.188 qpair failed and we were unable to recover it. 00:25:49.188 [2024-11-20 07:27:52.255096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.188 [2024-11-20 07:27:52.255123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.188 qpair failed and we were unable to recover it. 00:25:49.188 [2024-11-20 07:27:52.255239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.188 [2024-11-20 07:27:52.255265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.188 qpair failed and we were unable to recover it. 00:25:49.188 [2024-11-20 07:27:52.255379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.188 [2024-11-20 07:27:52.255422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.188 qpair failed and we were unable to recover it. 
00:25:49.188 [2024-11-20 07:27:52.255526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.188 [2024-11-20 07:27:52.255553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.188 qpair failed and we were unable to recover it. 00:25:49.188 [2024-11-20 07:27:52.255662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.188 [2024-11-20 07:27:52.255688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.188 qpair failed and we were unable to recover it. 00:25:49.188 [2024-11-20 07:27:52.255809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.188 [2024-11-20 07:27:52.255835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.188 qpair failed and we were unable to recover it. 00:25:49.188 [2024-11-20 07:27:52.255925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.188 [2024-11-20 07:27:52.255951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.188 qpair failed and we were unable to recover it. 00:25:49.188 [2024-11-20 07:27:52.256039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.188 [2024-11-20 07:27:52.256068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.188 qpair failed and we were unable to recover it. 00:25:49.188 [2024-11-20 07:27:52.256203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.188 [2024-11-20 07:27:52.256231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.188 qpair failed and we were unable to recover it. 00:25:49.188 [2024-11-20 07:27:52.256334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.188 [2024-11-20 07:27:52.256373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.188 qpair failed and we were unable to recover it. 00:25:49.188 [2024-11-20 07:27:52.256468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.188 [2024-11-20 07:27:52.256496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.188 qpair failed and we were unable to recover it. 00:25:49.188 [2024-11-20 07:27:52.256577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.188 [2024-11-20 07:27:52.256609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.188 qpair failed and we were unable to recover it. 00:25:49.188 [2024-11-20 07:27:52.256696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.188 [2024-11-20 07:27:52.256723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.188 qpair failed and we were unable to recover it. 
00:25:49.188 [2024-11-20 07:27:52.256843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.188 [2024-11-20 07:27:52.256870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.188 qpair failed and we were unable to recover it. 00:25:49.188 [2024-11-20 07:27:52.256954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.188 [2024-11-20 07:27:52.256981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.188 qpair failed and we were unable to recover it. 00:25:49.188 [2024-11-20 07:27:52.257078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.188 [2024-11-20 07:27:52.257106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.188 qpair failed and we were unable to recover it. 00:25:49.188 [2024-11-20 07:27:52.257215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.188 [2024-11-20 07:27:52.257242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.188 qpair failed and we were unable to recover it. 00:25:49.188 [2024-11-20 07:27:52.257325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.188 [2024-11-20 07:27:52.257353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.188 qpair failed and we were unable to recover it. 00:25:49.188 [2024-11-20 07:27:52.257458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.188 [2024-11-20 07:27:52.257501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.188 qpair failed and we were unable to recover it. 00:25:49.188 [2024-11-20 07:27:52.257624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.188 [2024-11-20 07:27:52.257651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.188 qpair failed and we were unable to recover it. 00:25:49.188 [2024-11-20 07:27:52.257791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.188 [2024-11-20 07:27:52.257833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.189 qpair failed and we were unable to recover it. 00:25:49.189 [2024-11-20 07:27:52.257958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.189 [2024-11-20 07:27:52.258001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.189 qpair failed and we were unable to recover it. 00:25:49.189 [2024-11-20 07:27:52.258117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.189 [2024-11-20 07:27:52.258143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.189 qpair failed and we were unable to recover it. 
00:25:49.189 [2024-11-20 07:27:52.258225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.189 [2024-11-20 07:27:52.258252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.189 qpair failed and we were unable to recover it. 00:25:49.189 [2024-11-20 07:27:52.258362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.189 [2024-11-20 07:27:52.258390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.189 qpair failed and we were unable to recover it. 00:25:49.189 [2024-11-20 07:27:52.258503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.189 [2024-11-20 07:27:52.258531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.189 qpair failed and we were unable to recover it. 00:25:49.189 [2024-11-20 07:27:52.258648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.189 [2024-11-20 07:27:52.258675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.189 qpair failed and we were unable to recover it. 00:25:49.189 [2024-11-20 07:27:52.258783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.189 [2024-11-20 07:27:52.258809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.189 qpair failed and we were unable to recover it. 00:25:49.189 [2024-11-20 07:27:52.258922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.189 [2024-11-20 07:27:52.258950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.189 qpair failed and we were unable to recover it. 00:25:49.189 [2024-11-20 07:27:52.259031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.189 [2024-11-20 07:27:52.259059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.189 qpair failed and we were unable to recover it. 00:25:49.189 [2024-11-20 07:27:52.259146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.189 [2024-11-20 07:27:52.259174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.189 qpair failed and we were unable to recover it. 00:25:49.189 [2024-11-20 07:27:52.259279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.189 [2024-11-20 07:27:52.259312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.189 qpair failed and we were unable to recover it. 00:25:49.189 [2024-11-20 07:27:52.259409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.189 [2024-11-20 07:27:52.259435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.189 qpair failed and we were unable to recover it. 
00:25:49.189 [2024-11-20 07:27:52.259550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.189 [2024-11-20 07:27:52.259576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.189 qpair failed and we were unable to recover it. 00:25:49.189 [2024-11-20 07:27:52.259696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.189 [2024-11-20 07:27:52.259723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.189 qpair failed and we were unable to recover it. 00:25:49.189 [2024-11-20 07:27:52.259832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.189 [2024-11-20 07:27:52.259858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.189 qpair failed and we were unable to recover it. 00:25:49.189 [2024-11-20 07:27:52.259945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.189 [2024-11-20 07:27:52.259973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.189 qpair failed and we were unable to recover it. 00:25:49.189 [2024-11-20 07:27:52.260061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.189 [2024-11-20 07:27:52.260088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.189 qpair failed and we were unable to recover it. 00:25:49.189 [2024-11-20 07:27:52.260222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.189 [2024-11-20 07:27:52.260267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.189 qpair failed and we were unable to recover it. 00:25:49.189 [2024-11-20 07:27:52.260385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.189 [2024-11-20 07:27:52.260425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.189 qpair failed and we were unable to recover it. 00:25:49.189 [2024-11-20 07:27:52.260516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.189 [2024-11-20 07:27:52.260545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.189 qpair failed and we were unable to recover it. 00:25:49.189 [2024-11-20 07:27:52.260668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.189 [2024-11-20 07:27:52.260697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.189 qpair failed and we were unable to recover it. 00:25:49.189 [2024-11-20 07:27:52.260828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.189 [2024-11-20 07:27:52.260855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.189 qpair failed and we were unable to recover it. 
00:25:49.189 [2024-11-20 07:27:52.260945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.189 [2024-11-20 07:27:52.260972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.189 qpair failed and we were unable to recover it. 00:25:49.189 [2024-11-20 07:27:52.261067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.189 [2024-11-20 07:27:52.261106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.189 qpair failed and we were unable to recover it. 00:25:49.189 [2024-11-20 07:27:52.261215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.189 [2024-11-20 07:27:52.261254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.189 qpair failed and we were unable to recover it. 00:25:49.189 [2024-11-20 07:27:52.261366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.189 [2024-11-20 07:27:52.261395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.189 qpair failed and we were unable to recover it. 00:25:49.189 [2024-11-20 07:27:52.261481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.189 [2024-11-20 07:27:52.261508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.189 qpair failed and we were unable to recover it. 00:25:49.189 [2024-11-20 07:27:52.261602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.189 [2024-11-20 07:27:52.261629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.189 qpair failed and we were unable to recover it. 00:25:49.189 [2024-11-20 07:27:52.261714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.189 [2024-11-20 07:27:52.261741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.189 qpair failed and we were unable to recover it. 00:25:49.189 [2024-11-20 07:27:52.261853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.189 [2024-11-20 07:27:52.261880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.189 qpair failed and we were unable to recover it. 00:25:49.189 [2024-11-20 07:27:52.261974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.189 [2024-11-20 07:27:52.262003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.189 qpair failed and we were unable to recover it. 00:25:49.189 [2024-11-20 07:27:52.262105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.189 [2024-11-20 07:27:52.262132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.189 qpair failed and we were unable to recover it. 
00:25:49.190 [2024-11-20 07:27:52.262241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.190 [2024-11-20 07:27:52.262268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.190 qpair failed and we were unable to recover it. 00:25:49.190 [2024-11-20 07:27:52.262380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.190 [2024-11-20 07:27:52.262407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.190 qpair failed and we were unable to recover it. 00:25:49.190 [2024-11-20 07:27:52.262521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.190 [2024-11-20 07:27:52.262548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.190 qpair failed and we were unable to recover it. 00:25:49.190 [2024-11-20 07:27:52.262637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.190 [2024-11-20 07:27:52.262664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.190 qpair failed and we were unable to recover it. 00:25:49.190 [2024-11-20 07:27:52.262749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.190 [2024-11-20 07:27:52.262775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.190 qpair failed and we were unable to recover it. 00:25:49.190 [2024-11-20 07:27:52.262889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.190 [2024-11-20 07:27:52.262915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.190 qpair failed and we were unable to recover it. 00:25:49.190 [2024-11-20 07:27:52.262994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.190 [2024-11-20 07:27:52.263020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.190 qpair failed and we were unable to recover it. 00:25:49.190 [2024-11-20 07:27:52.263136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.190 [2024-11-20 07:27:52.263162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.190 qpair failed and we were unable to recover it. 00:25:49.190 [2024-11-20 07:27:52.263261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.190 [2024-11-20 07:27:52.263300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.190 qpair failed and we were unable to recover it. 00:25:49.190 [2024-11-20 07:27:52.263395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.190 [2024-11-20 07:27:52.263423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.190 qpair failed and we were unable to recover it. 
00:25:49.190 [2024-11-20 07:27:52.263513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.190 [2024-11-20 07:27:52.263540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.190 qpair failed and we were unable to recover it. 00:25:49.190 [2024-11-20 07:27:52.263656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.190 [2024-11-20 07:27:52.263682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.190 qpair failed and we were unable to recover it. 00:25:49.190 [2024-11-20 07:27:52.263813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.190 [2024-11-20 07:27:52.263853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.190 qpair failed and we were unable to recover it. 00:25:49.190 [2024-11-20 07:27:52.263977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.190 [2024-11-20 07:27:52.264005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.190 qpair failed and we were unable to recover it. 00:25:49.190 [2024-11-20 07:27:52.264092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.190 [2024-11-20 07:27:52.264120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.190 qpair failed and we were unable to recover it. 00:25:49.190 [2024-11-20 07:27:52.264233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.190 [2024-11-20 07:27:52.264259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.190 qpair failed and we were unable to recover it. 00:25:49.190 [2024-11-20 07:27:52.264352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.190 [2024-11-20 07:27:52.264379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.190 qpair failed and we were unable to recover it. 00:25:49.190 [2024-11-20 07:27:52.264463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.190 [2024-11-20 07:27:52.264489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.190 qpair failed and we were unable to recover it. 00:25:49.190 [2024-11-20 07:27:52.264578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.190 [2024-11-20 07:27:52.264605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.190 qpair failed and we were unable to recover it. 00:25:49.190 [2024-11-20 07:27:52.264687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.190 [2024-11-20 07:27:52.264712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.190 qpair failed and we were unable to recover it. 
00:25:49.190 [2024-11-20 07:27:52.264800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.190 [2024-11-20 07:27:52.264827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.190 qpair failed and we were unable to recover it. 00:25:49.190 [2024-11-20 07:27:52.264917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.190 [2024-11-20 07:27:52.264945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.190 qpair failed and we were unable to recover it. 00:25:49.190 [2024-11-20 07:27:52.265052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.190 [2024-11-20 07:27:52.265092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.190 qpair failed and we were unable to recover it. 00:25:49.190 [2024-11-20 07:27:52.265209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.190 [2024-11-20 07:27:52.265237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.190 qpair failed and we were unable to recover it. 00:25:49.190 [2024-11-20 07:27:52.265365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.190 [2024-11-20 07:27:52.265393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.190 qpair failed and we were unable to recover it. 00:25:49.190 [2024-11-20 07:27:52.265472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.190 [2024-11-20 07:27:52.265499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.190 qpair failed and we were unable to recover it. 00:25:49.190 [2024-11-20 07:27:52.265620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.190 [2024-11-20 07:27:52.265647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.190 qpair failed and we were unable to recover it. 00:25:49.190 [2024-11-20 07:27:52.265785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.190 [2024-11-20 07:27:52.265812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.190 qpair failed and we were unable to recover it. 00:25:49.190 [2024-11-20 07:27:52.265902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.190 [2024-11-20 07:27:52.265928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.190 qpair failed and we were unable to recover it. 00:25:49.190 [2024-11-20 07:27:52.266031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.190 [2024-11-20 07:27:52.266057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.190 qpair failed and we were unable to recover it. 
00:25:49.190 [2024-11-20 07:27:52.266154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.190 [2024-11-20 07:27:52.266181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.190 qpair failed and we were unable to recover it. 00:25:49.190 [2024-11-20 07:27:52.266295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.190 [2024-11-20 07:27:52.266328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.190 qpair failed and we were unable to recover it. 00:25:49.190 [2024-11-20 07:27:52.266413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.190 [2024-11-20 07:27:52.266440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.190 qpair failed and we were unable to recover it. 00:25:49.190 [2024-11-20 07:27:52.266521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.190 [2024-11-20 07:27:52.266547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.190 qpair failed and we were unable to recover it. 00:25:49.190 [2024-11-20 07:27:52.266633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.190 [2024-11-20 07:27:52.266659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.190 qpair failed and we were unable to recover it. 00:25:49.190 [2024-11-20 07:27:52.266751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.190 [2024-11-20 07:27:52.266777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.190 qpair failed and we were unable to recover it. 00:25:49.190 [2024-11-20 07:27:52.266895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.190 [2024-11-20 07:27:52.266921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.190 qpair failed and we were unable to recover it. 00:25:49.190 [2024-11-20 07:27:52.266994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.190 [2024-11-20 07:27:52.267019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.191 qpair failed and we were unable to recover it. 00:25:49.191 [2024-11-20 07:27:52.267145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.191 [2024-11-20 07:27:52.267186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.191 qpair failed and we were unable to recover it. 00:25:49.191 [2024-11-20 07:27:52.267313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.191 [2024-11-20 07:27:52.267342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.191 qpair failed and we were unable to recover it. 
00:25:49.191 [2024-11-20 07:27:52.267445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.191 [2024-11-20 07:27:52.267485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.191 qpair failed and we were unable to recover it. 00:25:49.191 [2024-11-20 07:27:52.267567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.191 [2024-11-20 07:27:52.267594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.191 qpair failed and we were unable to recover it. 00:25:49.191 [2024-11-20 07:27:52.267685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.191 [2024-11-20 07:27:52.267712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.191 qpair failed and we were unable to recover it. 00:25:49.191 [2024-11-20 07:27:52.267802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.191 [2024-11-20 07:27:52.267829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.191 qpair failed and we were unable to recover it. 00:25:49.191 [2024-11-20 07:27:52.267911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.191 [2024-11-20 07:27:52.267938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.191 qpair failed and we were unable to recover it. 00:25:49.191 [2024-11-20 07:27:52.268033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.191 [2024-11-20 07:27:52.268059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.191 qpair failed and we were unable to recover it. 00:25:49.191 [2024-11-20 07:27:52.268170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.191 [2024-11-20 07:27:52.268195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.191 qpair failed and we were unable to recover it. 00:25:49.191 [2024-11-20 07:27:52.268277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.191 [2024-11-20 07:27:52.268308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.191 qpair failed and we were unable to recover it. 00:25:49.191 [2024-11-20 07:27:52.268418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.191 [2024-11-20 07:27:52.268445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.191 qpair failed and we were unable to recover it. 00:25:49.191 [2024-11-20 07:27:52.268534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.191 [2024-11-20 07:27:52.268560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.191 qpair failed and we were unable to recover it. 
00:25:49.191 [2024-11-20 07:27:52.268644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.191 [2024-11-20 07:27:52.268671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.191 qpair failed and we were unable to recover it. 00:25:49.191 [2024-11-20 07:27:52.268750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.191 [2024-11-20 07:27:52.268776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.191 qpair failed and we were unable to recover it. 00:25:49.191 [2024-11-20 07:27:52.268886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.191 [2024-11-20 07:27:52.268919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.191 qpair failed and we were unable to recover it. 00:25:49.191 [2024-11-20 07:27:52.269004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.191 [2024-11-20 07:27:52.269030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.191 qpair failed and we were unable to recover it. 00:25:49.191 [2024-11-20 07:27:52.269138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.191 [2024-11-20 07:27:52.269178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.191 qpair failed and we were unable to recover it. 00:25:49.191 [2024-11-20 07:27:52.269314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.191 [2024-11-20 07:27:52.269343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.191 qpair failed and we were unable to recover it. 00:25:49.191 [2024-11-20 07:27:52.269463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.191 [2024-11-20 07:27:52.269491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.191 qpair failed and we were unable to recover it. 00:25:49.191 [2024-11-20 07:27:52.269584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.191 [2024-11-20 07:27:52.269610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.191 qpair failed and we were unable to recover it. 00:25:49.191 [2024-11-20 07:27:52.269723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.191 [2024-11-20 07:27:52.269749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.191 qpair failed and we were unable to recover it. 00:25:49.191 [2024-11-20 07:27:52.269833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.191 [2024-11-20 07:27:52.269859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.191 qpair failed and we were unable to recover it. 
00:25:49.191 [2024-11-20 07:27:52.269967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.191 [2024-11-20 07:27:52.269993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.191 qpair failed and we were unable to recover it. 00:25:49.191 [2024-11-20 07:27:52.270130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.191 [2024-11-20 07:27:52.270170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.191 qpair failed and we were unable to recover it. 00:25:49.191 [2024-11-20 07:27:52.270266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.191 [2024-11-20 07:27:52.270296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.191 qpair failed and we were unable to recover it. 00:25:49.191 [2024-11-20 07:27:52.270447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.191 [2024-11-20 07:27:52.270475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.191 qpair failed and we were unable to recover it. 00:25:49.191 [2024-11-20 07:27:52.270610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.191 [2024-11-20 07:27:52.270637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.191 qpair failed and we were unable to recover it. 00:25:49.191 [2024-11-20 07:27:52.270719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.191 [2024-11-20 07:27:52.270746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.191 qpair failed and we were unable to recover it. 00:25:49.191 [2024-11-20 07:27:52.270865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.191 [2024-11-20 07:27:52.270892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.191 qpair failed and we were unable to recover it. 00:25:49.191 [2024-11-20 07:27:52.271037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.191 [2024-11-20 07:27:52.271063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.191 qpair failed and we were unable to recover it. 00:25:49.191 [2024-11-20 07:27:52.271154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.191 [2024-11-20 07:27:52.271181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.191 qpair failed and we were unable to recover it. 00:25:49.191 [2024-11-20 07:27:52.271318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.191 [2024-11-20 07:27:52.271359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.191 qpair failed and we were unable to recover it. 
00:25:49.191 [2024-11-20 07:27:52.271456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.191 [2024-11-20 07:27:52.271484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.191 qpair failed and we were unable to recover it. 00:25:49.191 [2024-11-20 07:27:52.271567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.191 [2024-11-20 07:27:52.271593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.191 qpair failed and we were unable to recover it. 00:25:49.191 [2024-11-20 07:27:52.271695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.191 [2024-11-20 07:27:52.271722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.191 qpair failed and we were unable to recover it. 00:25:49.191 [2024-11-20 07:27:52.271835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.191 [2024-11-20 07:27:52.271861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.191 qpair failed and we were unable to recover it. 00:25:49.191 [2024-11-20 07:27:52.271970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.191 [2024-11-20 07:27:52.271997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.191 qpair failed and we were unable to recover it. 00:25:49.192 [2024-11-20 07:27:52.272089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.192 [2024-11-20 07:27:52.272115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.192 qpair failed and we were unable to recover it. 00:25:49.192 [2024-11-20 07:27:52.272263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.192 [2024-11-20 07:27:52.272289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.192 qpair failed and we were unable to recover it. 00:25:49.192 [2024-11-20 07:27:52.272386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.192 [2024-11-20 07:27:52.272412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.192 qpair failed and we were unable to recover it. 00:25:49.192 [2024-11-20 07:27:52.272499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.192 [2024-11-20 07:27:52.272525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.192 qpair failed and we were unable to recover it. 00:25:49.192 [2024-11-20 07:27:52.272636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.192 [2024-11-20 07:27:52.272668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.192 qpair failed and we were unable to recover it. 
00:25:49.192 [2024-11-20 07:27:52.272786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.192 [2024-11-20 07:27:52.272813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.192 qpair failed and we were unable to recover it. 00:25:49.192 [2024-11-20 07:27:52.272890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.192 [2024-11-20 07:27:52.272917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.192 qpair failed and we were unable to recover it. 00:25:49.192 [2024-11-20 07:27:52.273002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.192 [2024-11-20 07:27:52.273028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.192 qpair failed and we were unable to recover it. 00:25:49.192 [2024-11-20 07:27:52.273159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.192 [2024-11-20 07:27:52.273198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.192 qpair failed and we were unable to recover it. 00:25:49.192 [2024-11-20 07:27:52.273323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.192 [2024-11-20 07:27:52.273354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.192 qpair failed and we were unable to recover it. 00:25:49.192 [2024-11-20 07:27:52.273446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.192 [2024-11-20 07:27:52.273473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.192 qpair failed and we were unable to recover it. 00:25:49.192 [2024-11-20 07:27:52.273563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.192 [2024-11-20 07:27:52.273591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.192 qpair failed and we were unable to recover it. 00:25:49.192 [2024-11-20 07:27:52.273694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.192 [2024-11-20 07:27:52.273720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.192 qpair failed and we were unable to recover it. 00:25:49.192 [2024-11-20 07:27:52.273829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.192 [2024-11-20 07:27:52.273856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.192 qpair failed and we were unable to recover it. 00:25:49.192 [2024-11-20 07:27:52.273975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.192 [2024-11-20 07:27:52.274002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.192 qpair failed and we were unable to recover it. 
00:25:49.192 [2024-11-20 07:27:52.274083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.192 [2024-11-20 07:27:52.274110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.192 qpair failed and we were unable to recover it. 00:25:49.192 [2024-11-20 07:27:52.274218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.192 [2024-11-20 07:27:52.274244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.192 qpair failed and we were unable to recover it. 00:25:49.192 [2024-11-20 07:27:52.274336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.192 [2024-11-20 07:27:52.274363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.192 qpair failed and we were unable to recover it. 00:25:49.192 [2024-11-20 07:27:52.274455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.192 [2024-11-20 07:27:52.274483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.192 qpair failed and we were unable to recover it. 00:25:49.192 [2024-11-20 07:27:52.274567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.192 [2024-11-20 07:27:52.274593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.192 qpair failed and we were unable to recover it. 00:25:49.192 [2024-11-20 07:27:52.274707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.192 [2024-11-20 07:27:52.274733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.192 qpair failed and we were unable to recover it. 00:25:49.192 [2024-11-20 07:27:52.274847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.192 [2024-11-20 07:27:52.274874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.192 qpair failed and we were unable to recover it. 00:25:49.192 [2024-11-20 07:27:52.274956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.192 [2024-11-20 07:27:52.274982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.192 qpair failed and we were unable to recover it. 00:25:49.192 [2024-11-20 07:27:52.275059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.192 [2024-11-20 07:27:52.275084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.192 qpair failed and we were unable to recover it. 00:25:49.192 [2024-11-20 07:27:52.275185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.192 [2024-11-20 07:27:52.275211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.192 qpair failed and we were unable to recover it. 
00:25:49.192 [2024-11-20 07:27:52.275299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.192 [2024-11-20 07:27:52.275333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.192 qpair failed and we were unable to recover it. 00:25:49.192 [2024-11-20 07:27:52.275425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.192 [2024-11-20 07:27:52.275451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.192 qpair failed and we were unable to recover it. 00:25:49.192 [2024-11-20 07:27:52.275570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.192 [2024-11-20 07:27:52.275599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.192 qpair failed and we were unable to recover it. 00:25:49.192 [2024-11-20 07:27:52.275683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.192 [2024-11-20 07:27:52.275709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.192 qpair failed and we were unable to recover it. 00:25:49.192 [2024-11-20 07:27:52.275800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.192 [2024-11-20 07:27:52.275827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.192 qpair failed and we were unable to recover it. 00:25:49.192 [2024-11-20 07:27:52.275909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.192 [2024-11-20 07:27:52.275935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.192 qpair failed and we were unable to recover it. 00:25:49.192 [2024-11-20 07:27:52.276070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.192 [2024-11-20 07:27:52.276110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.192 qpair failed and we were unable to recover it. 00:25:49.192 [2024-11-20 07:27:52.276204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.192 [2024-11-20 07:27:52.276232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.192 qpair failed and we were unable to recover it. 00:25:49.192 [2024-11-20 07:27:52.276323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.192 [2024-11-20 07:27:52.276350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.192 qpair failed and we were unable to recover it. 00:25:49.192 [2024-11-20 07:27:52.276445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.192 [2024-11-20 07:27:52.276473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.192 qpair failed and we were unable to recover it. 
00:25:49.192 [2024-11-20 07:27:52.276586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.192 [2024-11-20 07:27:52.276612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.192 qpair failed and we were unable to recover it. 00:25:49.192 [2024-11-20 07:27:52.276695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.192 [2024-11-20 07:27:52.276723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.192 qpair failed and we were unable to recover it. 00:25:49.192 [2024-11-20 07:27:52.276840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.193 [2024-11-20 07:27:52.276868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.193 qpair failed and we were unable to recover it. 00:25:49.193 [2024-11-20 07:27:52.276951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.193 [2024-11-20 07:27:52.276978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.193 qpair failed and we were unable to recover it. 00:25:49.193 [2024-11-20 07:27:52.277069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.193 [2024-11-20 07:27:52.277097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.193 qpair failed and we were unable to recover it. 00:25:49.193 [2024-11-20 07:27:52.277211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.193 [2024-11-20 07:27:52.277238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.193 qpair failed and we were unable to recover it. 00:25:49.193 [2024-11-20 07:27:52.277344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.193 [2024-11-20 07:27:52.277383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.193 qpair failed and we were unable to recover it. 00:25:49.193 [2024-11-20 07:27:52.277504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.193 [2024-11-20 07:27:52.277531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.193 qpair failed and we were unable to recover it. 00:25:49.193 [2024-11-20 07:27:52.277625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.193 [2024-11-20 07:27:52.277652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.193 qpair failed and we were unable to recover it. 00:25:49.193 [2024-11-20 07:27:52.277730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.193 [2024-11-20 07:27:52.277762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.193 qpair failed and we were unable to recover it. 
00:25:49.193 [2024-11-20 07:27:52.277852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.193 [2024-11-20 07:27:52.277880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.193 qpair failed and we were unable to recover it. 00:25:49.193 [2024-11-20 07:27:52.277995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.193 [2024-11-20 07:27:52.278021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.193 qpair failed and we were unable to recover it. 00:25:49.193 [2024-11-20 07:27:52.278129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.193 [2024-11-20 07:27:52.278155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.193 qpair failed and we were unable to recover it. 00:25:49.193 [2024-11-20 07:27:52.278240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.193 [2024-11-20 07:27:52.278266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.193 qpair failed and we were unable to recover it. 00:25:49.193 [2024-11-20 07:27:52.278364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.193 [2024-11-20 07:27:52.278393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.193 qpair failed and we were unable to recover it. 00:25:49.193 [2024-11-20 07:27:52.278474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.193 [2024-11-20 07:27:52.278501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.193 qpair failed and we were unable to recover it. 00:25:49.193 [2024-11-20 07:27:52.278595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.193 [2024-11-20 07:27:52.278623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.193 qpair failed and we were unable to recover it. 00:25:49.193 [2024-11-20 07:27:52.278768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.193 [2024-11-20 07:27:52.278795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.193 qpair failed and we were unable to recover it. 00:25:49.193 [2024-11-20 07:27:52.278881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.193 [2024-11-20 07:27:52.278909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.193 qpair failed and we were unable to recover it. 00:25:49.193 [2024-11-20 07:27:52.278994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.193 [2024-11-20 07:27:52.279020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.193 qpair failed and we were unable to recover it. 
00:25:49.193 [2024-11-20 07:27:52.279101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.193 [2024-11-20 07:27:52.279128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.193 qpair failed and we were unable to recover it. 00:25:49.193 [2024-11-20 07:27:52.279211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.193 [2024-11-20 07:27:52.279239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.193 qpair failed and we were unable to recover it. 00:25:49.193 [2024-11-20 07:27:52.279332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.193 [2024-11-20 07:27:52.279360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.193 qpair failed and we were unable to recover it. 00:25:49.193 [2024-11-20 07:27:52.279453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.193 [2024-11-20 07:27:52.279480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.193 qpair failed and we were unable to recover it. 00:25:49.193 [2024-11-20 07:27:52.279605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.193 [2024-11-20 07:27:52.279631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.193 qpair failed and we were unable to recover it. 00:25:49.193 [2024-11-20 07:27:52.279747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.193 [2024-11-20 07:27:52.279774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.193 qpair failed and we were unable to recover it. 00:25:49.193 [2024-11-20 07:27:52.279863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.193 [2024-11-20 07:27:52.279890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.193 qpair failed and we were unable to recover it. 00:25:49.193 [2024-11-20 07:27:52.279967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.193 [2024-11-20 07:27:52.279995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.193 qpair failed and we were unable to recover it. 00:25:49.193 [2024-11-20 07:27:52.280092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.193 [2024-11-20 07:27:52.280132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.193 qpair failed and we were unable to recover it. 00:25:49.193 [2024-11-20 07:27:52.280219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.193 [2024-11-20 07:27:52.280247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.193 qpair failed and we were unable to recover it. 
00:25:49.193 [2024-11-20 07:27:52.280336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.193 [2024-11-20 07:27:52.280363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.193 qpair failed and we were unable to recover it. 00:25:49.193 [2024-11-20 07:27:52.280447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.193 [2024-11-20 07:27:52.280474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.193 qpair failed and we were unable to recover it. 00:25:49.193 [2024-11-20 07:27:52.280558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.193 [2024-11-20 07:27:52.280585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.193 qpair failed and we were unable to recover it. 00:25:49.193 [2024-11-20 07:27:52.280724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.193 [2024-11-20 07:27:52.280750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.193 qpair failed and we were unable to recover it. 00:25:49.193 [2024-11-20 07:27:52.280841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.193 [2024-11-20 07:27:52.280868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.193 qpair failed and we were unable to recover it. 00:25:49.194 [2024-11-20 07:27:52.280949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.194 [2024-11-20 07:27:52.280975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.194 qpair failed and we were unable to recover it. 00:25:49.194 [2024-11-20 07:27:52.281078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.194 [2024-11-20 07:27:52.281115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.194 qpair failed and we were unable to recover it. 00:25:49.194 [2024-11-20 07:27:52.281234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.194 [2024-11-20 07:27:52.281260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.194 qpair failed and we were unable to recover it. 00:25:49.194 [2024-11-20 07:27:52.281367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.194 [2024-11-20 07:27:52.281395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.194 qpair failed and we were unable to recover it. 00:25:49.194 [2024-11-20 07:27:52.281480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.194 [2024-11-20 07:27:52.281507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.194 qpair failed and we were unable to recover it. 
00:25:49.194 [2024-11-20 07:27:52.281597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.194 [2024-11-20 07:27:52.281626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.194 qpair failed and we were unable to recover it. 00:25:49.194 [2024-11-20 07:27:52.281741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.194 [2024-11-20 07:27:52.281767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.194 qpair failed and we were unable to recover it. 00:25:49.194 [2024-11-20 07:27:52.281848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.194 [2024-11-20 07:27:52.281875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.194 qpair failed and we were unable to recover it. 00:25:49.194 [2024-11-20 07:27:52.281986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.194 [2024-11-20 07:27:52.282012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.194 qpair failed and we were unable to recover it. 00:25:49.194 [2024-11-20 07:27:52.282104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.194 [2024-11-20 07:27:52.282132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.194 qpair failed and we were unable to recover it. 00:25:49.194 [2024-11-20 07:27:52.282224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.194 [2024-11-20 07:27:52.282253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.194 qpair failed and we were unable to recover it. 00:25:49.194 [2024-11-20 07:27:52.282349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.194 [2024-11-20 07:27:52.282377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.194 qpair failed and we were unable to recover it. 00:25:49.194 [2024-11-20 07:27:52.282489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.194 [2024-11-20 07:27:52.282516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.194 qpair failed and we were unable to recover it. 00:25:49.194 [2024-11-20 07:27:52.282606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.194 [2024-11-20 07:27:52.282632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.194 qpair failed and we were unable to recover it. 00:25:49.194 [2024-11-20 07:27:52.282713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.194 [2024-11-20 07:27:52.282740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.194 qpair failed and we were unable to recover it. 
00:25:49.194 [2024-11-20 07:27:52.282866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.194 [2024-11-20 07:27:52.282892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.194 qpair failed and we were unable to recover it. 00:25:49.194 [2024-11-20 07:27:52.282988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.194 [2024-11-20 07:27:52.283015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.194 qpair failed and we were unable to recover it. 00:25:49.194 [2024-11-20 07:27:52.283106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.194 [2024-11-20 07:27:52.283132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.194 qpair failed and we were unable to recover it. 00:25:49.194 [2024-11-20 07:27:52.283232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.194 [2024-11-20 07:27:52.283272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.194 qpair failed and we were unable to recover it. 00:25:49.194 [2024-11-20 07:27:52.283395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.194 [2024-11-20 07:27:52.283424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.194 qpair failed and we were unable to recover it. 00:25:49.194 [2024-11-20 07:27:52.283525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.194 [2024-11-20 07:27:52.283565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.194 qpair failed and we were unable to recover it. 00:25:49.194 [2024-11-20 07:27:52.283667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.194 [2024-11-20 07:27:52.283697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.194 qpair failed and we were unable to recover it. 00:25:49.194 [2024-11-20 07:27:52.283809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.194 [2024-11-20 07:27:52.283836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.194 qpair failed and we were unable to recover it. 00:25:49.194 [2024-11-20 07:27:52.283926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.194 [2024-11-20 07:27:52.283953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.194 qpair failed and we were unable to recover it. 00:25:49.194 [2024-11-20 07:27:52.284036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.194 [2024-11-20 07:27:52.284062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.194 qpair failed and we were unable to recover it. 
00:25:49.194 [2024-11-20 07:27:52.284167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.194 [2024-11-20 07:27:52.284232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.194 qpair failed and we were unable to recover it. 00:25:49.194 [2024-11-20 07:27:52.284355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.194 [2024-11-20 07:27:52.284385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.194 qpair failed and we were unable to recover it. 00:25:49.194 [2024-11-20 07:27:52.284503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.194 [2024-11-20 07:27:52.284529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.194 qpair failed and we were unable to recover it. 00:25:49.194 [2024-11-20 07:27:52.284653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.194 [2024-11-20 07:27:52.284680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.194 qpair failed and we were unable to recover it. 00:25:49.194 [2024-11-20 07:27:52.284802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.194 [2024-11-20 07:27:52.284830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.194 qpair failed and we were unable to recover it. 00:25:49.194 [2024-11-20 07:27:52.284922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.194 [2024-11-20 07:27:52.284950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.194 qpair failed and we were unable to recover it. 00:25:49.194 [2024-11-20 07:27:52.285033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.194 [2024-11-20 07:27:52.285059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.194 qpair failed and we were unable to recover it. 00:25:49.194 [2024-11-20 07:27:52.285139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.194 [2024-11-20 07:27:52.285166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.194 qpair failed and we were unable to recover it. 00:25:49.194 [2024-11-20 07:27:52.285253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.194 [2024-11-20 07:27:52.285279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.194 qpair failed and we were unable to recover it. 00:25:49.194 [2024-11-20 07:27:52.285398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.194 [2024-11-20 07:27:52.285425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.194 qpair failed and we were unable to recover it. 
00:25:49.194 [2024-11-20 07:27:52.285507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.194 [2024-11-20 07:27:52.285534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.194 qpair failed and we were unable to recover it. 00:25:49.194 [2024-11-20 07:27:52.285656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.194 [2024-11-20 07:27:52.285683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.194 qpair failed and we were unable to recover it. 00:25:49.195 [2024-11-20 07:27:52.285800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.195 [2024-11-20 07:27:52.285827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.195 qpair failed and we were unable to recover it. 00:25:49.195 [2024-11-20 07:27:52.285939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.195 [2024-11-20 07:27:52.285966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.195 qpair failed and we were unable to recover it. 00:25:49.195 [2024-11-20 07:27:52.286075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.195 [2024-11-20 07:27:52.286101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.195 qpair failed and we were unable to recover it. 00:25:49.195 [2024-11-20 07:27:52.286188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.195 [2024-11-20 07:27:52.286215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.195 qpair failed and we were unable to recover it. 00:25:49.195 [2024-11-20 07:27:52.286324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.195 [2024-11-20 07:27:52.286355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.195 qpair failed and we were unable to recover it. 00:25:49.195 [2024-11-20 07:27:52.286440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.195 [2024-11-20 07:27:52.286467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.195 qpair failed and we were unable to recover it. 00:25:49.195 [2024-11-20 07:27:52.286580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.195 [2024-11-20 07:27:52.286607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.195 qpair failed and we were unable to recover it. 00:25:49.195 [2024-11-20 07:27:52.286731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.195 [2024-11-20 07:27:52.286757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.195 qpair failed and we were unable to recover it. 
00:25:49.195 [2024-11-20 07:27:52.286872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.195 [2024-11-20 07:27:52.286899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.195 qpair failed and we were unable to recover it. 00:25:49.195 [2024-11-20 07:27:52.287013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.195 [2024-11-20 07:27:52.287040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.195 qpair failed and we were unable to recover it. 00:25:49.195 [2024-11-20 07:27:52.287132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.195 [2024-11-20 07:27:52.287159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.195 qpair failed and we were unable to recover it. 00:25:49.195 [2024-11-20 07:27:52.287246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.195 [2024-11-20 07:27:52.287273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.195 qpair failed and we were unable to recover it. 00:25:49.195 [2024-11-20 07:27:52.287411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.195 [2024-11-20 07:27:52.287451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.195 qpair failed and we were unable to recover it. 00:25:49.195 [2024-11-20 07:27:52.287562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.195 [2024-11-20 07:27:52.287602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.195 qpair failed and we were unable to recover it. 00:25:49.195 [2024-11-20 07:27:52.287740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.195 [2024-11-20 07:27:52.287770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.195 qpair failed and we were unable to recover it. 00:25:49.195 [2024-11-20 07:27:52.287864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.195 [2024-11-20 07:27:52.287892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.195 qpair failed and we were unable to recover it. 00:25:49.195 [2024-11-20 07:27:52.288022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.195 [2024-11-20 07:27:52.288049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.195 qpair failed and we were unable to recover it. 00:25:49.195 [2024-11-20 07:27:52.288165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.195 [2024-11-20 07:27:52.288192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.195 qpair failed and we were unable to recover it. 
00:25:49.195 [2024-11-20 07:27:52.288288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.195 [2024-11-20 07:27:52.288324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.195 qpair failed and we were unable to recover it. 00:25:49.195 [2024-11-20 07:27:52.288414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.195 [2024-11-20 07:27:52.288441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.195 qpair failed and we were unable to recover it. 00:25:49.195 [2024-11-20 07:27:52.288534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.195 [2024-11-20 07:27:52.288561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.195 qpair failed and we were unable to recover it. 00:25:49.195 [2024-11-20 07:27:52.288666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.195 [2024-11-20 07:27:52.288693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.195 qpair failed and we were unable to recover it. 00:25:49.195 [2024-11-20 07:27:52.288787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.195 [2024-11-20 07:27:52.288813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.195 qpair failed and we were unable to recover it. 00:25:49.195 [2024-11-20 07:27:52.288920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.195 [2024-11-20 07:27:52.288947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.195 qpair failed and we were unable to recover it. 00:25:49.195 [2024-11-20 07:27:52.289034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.195 [2024-11-20 07:27:52.289062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.195 qpair failed and we were unable to recover it. 00:25:49.195 [2024-11-20 07:27:52.289177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.195 [2024-11-20 07:27:52.289204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.195 qpair failed and we were unable to recover it. 00:25:49.195 [2024-11-20 07:27:52.289293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.195 [2024-11-20 07:27:52.289331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.195 qpair failed and we were unable to recover it. 00:25:49.195 [2024-11-20 07:27:52.289417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.195 [2024-11-20 07:27:52.289444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.195 qpair failed and we were unable to recover it. 
00:25:49.195 [2024-11-20 07:27:52.289524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.195 [2024-11-20 07:27:52.289551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.195 qpair failed and we were unable to recover it. 00:25:49.195 [2024-11-20 07:27:52.289696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.195 [2024-11-20 07:27:52.289723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.195 qpair failed and we were unable to recover it. 00:25:49.195 [2024-11-20 07:27:52.289815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.195 [2024-11-20 07:27:52.289842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.195 qpair failed and we were unable to recover it. 00:25:49.195 [2024-11-20 07:27:52.289987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.195 [2024-11-20 07:27:52.290013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.195 qpair failed and we were unable to recover it. 00:25:49.195 [2024-11-20 07:27:52.290161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.195 [2024-11-20 07:27:52.290188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.195 qpair failed and we were unable to recover it. 00:25:49.195 [2024-11-20 07:27:52.290295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.195 [2024-11-20 07:27:52.290329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.195 qpair failed and we were unable to recover it. 00:25:49.195 [2024-11-20 07:27:52.290418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.195 [2024-11-20 07:27:52.290444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.195 qpair failed and we were unable to recover it. 00:25:49.195 [2024-11-20 07:27:52.290583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.195 [2024-11-20 07:27:52.290609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.195 qpair failed and we were unable to recover it. 00:25:49.195 [2024-11-20 07:27:52.290705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.195 [2024-11-20 07:27:52.290731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.195 qpair failed and we were unable to recover it. 00:25:49.195 [2024-11-20 07:27:52.290845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.196 [2024-11-20 07:27:52.290873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.196 qpair failed and we were unable to recover it. 
00:25:49.196 [2024-11-20 07:27:52.290985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.196 [2024-11-20 07:27:52.291013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.196 qpair failed and we were unable to recover it. 00:25:49.196 [2024-11-20 07:27:52.291099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.196 [2024-11-20 07:27:52.291126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.196 qpair failed and we were unable to recover it. 00:25:49.196 [2024-11-20 07:27:52.291206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.196 [2024-11-20 07:27:52.291232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.196 qpair failed and we were unable to recover it. 00:25:49.196 [2024-11-20 07:27:52.291349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.196 [2024-11-20 07:27:52.291378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.196 qpair failed and we were unable to recover it. 00:25:49.196 [2024-11-20 07:27:52.291469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.196 [2024-11-20 07:27:52.291497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.196 qpair failed and we were unable to recover it. 00:25:49.196 [2024-11-20 07:27:52.291610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.196 [2024-11-20 07:27:52.291637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.196 qpair failed and we were unable to recover it. 00:25:49.196 [2024-11-20 07:27:52.291749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.196 [2024-11-20 07:27:52.291780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.196 qpair failed and we were unable to recover it. 00:25:49.196 [2024-11-20 07:27:52.291896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.196 [2024-11-20 07:27:52.291922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.196 qpair failed and we were unable to recover it. 00:25:49.196 [2024-11-20 07:27:52.292046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.196 [2024-11-20 07:27:52.292073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.196 qpair failed and we were unable to recover it. 00:25:49.196 [2024-11-20 07:27:52.292193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.196 [2024-11-20 07:27:52.292219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.196 qpair failed and we were unable to recover it. 
00:25:49.196 [2024-11-20 07:27:52.292339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.196 [2024-11-20 07:27:52.292388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.196 qpair failed and we were unable to recover it. 00:25:49.196 [2024-11-20 07:27:52.292487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.196 [2024-11-20 07:27:52.292515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.196 qpair failed and we were unable to recover it. 00:25:49.196 [2024-11-20 07:27:52.292598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.196 [2024-11-20 07:27:52.292626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.196 qpair failed and we were unable to recover it. 00:25:49.196 [2024-11-20 07:27:52.292715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.196 [2024-11-20 07:27:52.292742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.196 qpair failed and we were unable to recover it. 00:25:49.196 [2024-11-20 07:27:52.292857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.196 [2024-11-20 07:27:52.292884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.196 qpair failed and we were unable to recover it. 00:25:49.196 [2024-11-20 07:27:52.293000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.196 [2024-11-20 07:27:52.293027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.196 qpair failed and we were unable to recover it. 00:25:49.196 [2024-11-20 07:27:52.293114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.196 [2024-11-20 07:27:52.293141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.196 qpair failed and we were unable to recover it. 00:25:49.196 [2024-11-20 07:27:52.293233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.196 [2024-11-20 07:27:52.293260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.196 qpair failed and we were unable to recover it. 00:25:49.196 [2024-11-20 07:27:52.293362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.196 [2024-11-20 07:27:52.293390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.196 qpair failed and we were unable to recover it. 00:25:49.196 [2024-11-20 07:27:52.293484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.196 [2024-11-20 07:27:52.293512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.196 qpair failed and we were unable to recover it. 
00:25:49.196 [2024-11-20 07:27:52.293636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.196 [2024-11-20 07:27:52.293662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.196 qpair failed and we were unable to recover it. 00:25:49.196 [2024-11-20 07:27:52.293786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.196 [2024-11-20 07:27:52.293813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.196 qpair failed and we were unable to recover it. 00:25:49.196 [2024-11-20 07:27:52.293922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.196 [2024-11-20 07:27:52.293948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.196 qpair failed and we were unable to recover it. 00:25:49.196 [2024-11-20 07:27:52.294033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.196 [2024-11-20 07:27:52.294059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.196 qpair failed and we were unable to recover it. 00:25:49.196 [2024-11-20 07:27:52.294146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.196 [2024-11-20 07:27:52.294173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.196 qpair failed and we were unable to recover it. 00:25:49.196 [2024-11-20 07:27:52.294264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.196 [2024-11-20 07:27:52.294292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.196 qpair failed and we were unable to recover it. 00:25:49.196 [2024-11-20 07:27:52.294393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.196 [2024-11-20 07:27:52.294420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.196 qpair failed and we were unable to recover it. 00:25:49.196 [2024-11-20 07:27:52.294539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.196 [2024-11-20 07:27:52.294566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.196 qpair failed and we were unable to recover it. 00:25:49.196 [2024-11-20 07:27:52.294681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.196 [2024-11-20 07:27:52.294709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.196 qpair failed and we were unable to recover it. 00:25:49.196 [2024-11-20 07:27:52.294805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.196 [2024-11-20 07:27:52.294832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.196 qpair failed and we were unable to recover it. 
00:25:49.196 [2024-11-20 07:27:52.294943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.196 [2024-11-20 07:27:52.294969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.196 qpair failed and we were unable to recover it. 00:25:49.196 [2024-11-20 07:27:52.295079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.196 [2024-11-20 07:27:52.295105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.196 qpair failed and we were unable to recover it. 00:25:49.196 [2024-11-20 07:27:52.295186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.196 [2024-11-20 07:27:52.295212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.196 qpair failed and we were unable to recover it. 00:25:49.196 [2024-11-20 07:27:52.295319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.196 [2024-11-20 07:27:52.295359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.196 qpair failed and we were unable to recover it. 00:25:49.196 [2024-11-20 07:27:52.295457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.196 [2024-11-20 07:27:52.295484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.196 qpair failed and we were unable to recover it. 00:25:49.196 [2024-11-20 07:27:52.295569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.196 [2024-11-20 07:27:52.295595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.196 qpair failed and we were unable to recover it. 00:25:49.196 [2024-11-20 07:27:52.295706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.196 [2024-11-20 07:27:52.295732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.196 qpair failed and we were unable to recover it. 00:25:49.197 [2024-11-20 07:27:52.295872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.197 [2024-11-20 07:27:52.295899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.197 qpair failed and we were unable to recover it. 00:25:49.197 [2024-11-20 07:27:52.295983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.197 [2024-11-20 07:27:52.296009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.197 qpair failed and we were unable to recover it. 00:25:49.197 [2024-11-20 07:27:52.296123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.197 [2024-11-20 07:27:52.296149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.197 qpair failed and we were unable to recover it. 
00:25:49.197 [2024-11-20 07:27:52.296258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.197 [2024-11-20 07:27:52.296284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.197 qpair failed and we were unable to recover it. 00:25:49.197 [2024-11-20 07:27:52.296404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.197 [2024-11-20 07:27:52.296430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.197 qpair failed and we were unable to recover it. 00:25:49.197 [2024-11-20 07:27:52.296514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.197 [2024-11-20 07:27:52.296540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.197 qpair failed and we were unable to recover it. 00:25:49.197 [2024-11-20 07:27:52.296664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.197 [2024-11-20 07:27:52.296691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.197 qpair failed and we were unable to recover it. 00:25:49.197 [2024-11-20 07:27:52.296807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.197 [2024-11-20 07:27:52.296833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.197 qpair failed and we were unable to recover it. 00:25:49.197 [2024-11-20 07:27:52.296919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.197 [2024-11-20 07:27:52.296945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.197 qpair failed and we were unable to recover it. 00:25:49.197 [2024-11-20 07:27:52.297054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.197 [2024-11-20 07:27:52.297081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.197 qpair failed and we were unable to recover it. 00:25:49.197 [2024-11-20 07:27:52.297202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.197 [2024-11-20 07:27:52.297227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.197 qpair failed and we were unable to recover it. 00:25:49.197 [2024-11-20 07:27:52.297318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.197 [2024-11-20 07:27:52.297345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.197 qpair failed and we were unable to recover it. 00:25:49.197 [2024-11-20 07:27:52.297436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.197 [2024-11-20 07:27:52.297462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.197 qpair failed and we were unable to recover it. 
00:25:49.197 [2024-11-20 07:27:52.297545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.197 [2024-11-20 07:27:52.297571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.197 qpair failed and we were unable to recover it. 00:25:49.197 [2024-11-20 07:27:52.297683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.197 [2024-11-20 07:27:52.297709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.197 qpair failed and we were unable to recover it. 00:25:49.197 [2024-11-20 07:27:52.297790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.197 [2024-11-20 07:27:52.297817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.197 qpair failed and we were unable to recover it. 00:25:49.197 [2024-11-20 07:27:52.297912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.197 [2024-11-20 07:27:52.297938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.197 qpair failed and we were unable to recover it. 00:25:49.197 [2024-11-20 07:27:52.298080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.197 [2024-11-20 07:27:52.298120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.197 qpair failed and we were unable to recover it. 00:25:49.197 [2024-11-20 07:27:52.298210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.197 [2024-11-20 07:27:52.298238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.197 qpair failed and we were unable to recover it. 00:25:49.197 [2024-11-20 07:27:52.298332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.197 [2024-11-20 07:27:52.298361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.197 qpair failed and we were unable to recover it. 00:25:49.197 [2024-11-20 07:27:52.298448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.197 [2024-11-20 07:27:52.298475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.197 qpair failed and we were unable to recover it. 00:25:49.197 [2024-11-20 07:27:52.298566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.197 [2024-11-20 07:27:52.298593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.197 qpair failed and we were unable to recover it. 00:25:49.197 [2024-11-20 07:27:52.298714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.197 [2024-11-20 07:27:52.298741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.197 qpair failed and we were unable to recover it. 
00:25:49.197 [2024-11-20 07:27:52.298863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.197 [2024-11-20 07:27:52.298890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.197 qpair failed and we were unable to recover it. 00:25:49.197 [2024-11-20 07:27:52.299017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.197 [2024-11-20 07:27:52.299044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.197 qpair failed and we were unable to recover it. 00:25:49.197 [2024-11-20 07:27:52.299135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.197 [2024-11-20 07:27:52.299162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.197 qpair failed and we were unable to recover it. 00:25:49.197 [2024-11-20 07:27:52.299273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.197 [2024-11-20 07:27:52.299299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.197 qpair failed and we were unable to recover it. 00:25:49.197 [2024-11-20 07:27:52.299387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.197 [2024-11-20 07:27:52.299413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.197 qpair failed and we were unable to recover it. 00:25:49.197 [2024-11-20 07:27:52.299515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.197 [2024-11-20 07:27:52.299540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.197 qpair failed and we were unable to recover it. 00:25:49.197 [2024-11-20 07:27:52.299630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.197 [2024-11-20 07:27:52.299656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.197 qpair failed and we were unable to recover it. 00:25:49.197 [2024-11-20 07:27:52.299739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.197 [2024-11-20 07:27:52.299765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.197 qpair failed and we were unable to recover it. 00:25:49.197 [2024-11-20 07:27:52.299907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.197 [2024-11-20 07:27:52.299932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.197 qpair failed and we were unable to recover it. 00:25:49.197 [2024-11-20 07:27:52.300043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.197 [2024-11-20 07:27:52.300069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.197 qpair failed and we were unable to recover it. 
00:25:49.197 [2024-11-20 07:27:52.300203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.197 [2024-11-20 07:27:52.300232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.197 qpair failed and we were unable to recover it. 00:25:49.197 [2024-11-20 07:27:52.300340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.197 [2024-11-20 07:27:52.300379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.197 qpair failed and we were unable to recover it. 00:25:49.197 [2024-11-20 07:27:52.300499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.197 [2024-11-20 07:27:52.300527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.197 qpair failed and we were unable to recover it. 00:25:49.197 [2024-11-20 07:27:52.300640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.197 [2024-11-20 07:27:52.300667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.197 qpair failed and we were unable to recover it. 00:25:49.197 [2024-11-20 07:27:52.300786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.198 [2024-11-20 07:27:52.300812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.198 qpair failed and we were unable to recover it. 00:25:49.198 [2024-11-20 07:27:52.300896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.198 [2024-11-20 07:27:52.300923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.198 qpair failed and we were unable to recover it. 00:25:49.198 [2024-11-20 07:27:52.301046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.198 [2024-11-20 07:27:52.301072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.198 qpair failed and we were unable to recover it. 00:25:49.198 [2024-11-20 07:27:52.301159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.198 [2024-11-20 07:27:52.301188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.198 qpair failed and we were unable to recover it. 00:25:49.198 [2024-11-20 07:27:52.301271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.198 [2024-11-20 07:27:52.301298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.198 qpair failed and we were unable to recover it. 00:25:49.198 [2024-11-20 07:27:52.301390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.198 [2024-11-20 07:27:52.301416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.198 qpair failed and we were unable to recover it. 
00:25:49.198 [2024-11-20 07:27:52.301512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.198 [2024-11-20 07:27:52.301538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.198 qpair failed and we were unable to recover it. 00:25:49.198 [2024-11-20 07:27:52.301654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.198 [2024-11-20 07:27:52.301680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.198 qpair failed and we were unable to recover it. 00:25:49.198 [2024-11-20 07:27:52.301805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.198 [2024-11-20 07:27:52.301831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.198 qpair failed and we were unable to recover it. 00:25:49.198 [2024-11-20 07:27:52.301920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.198 [2024-11-20 07:27:52.301947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.198 qpair failed and we were unable to recover it. 00:25:49.198 [2024-11-20 07:27:52.302055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.198 [2024-11-20 07:27:52.302082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.198 qpair failed and we were unable to recover it. 00:25:49.198 [2024-11-20 07:27:52.302193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.198 [2024-11-20 07:27:52.302219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.198 qpair failed and we were unable to recover it. 00:25:49.198 [2024-11-20 07:27:52.302329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.198 [2024-11-20 07:27:52.302356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.198 qpair failed and we were unable to recover it. 00:25:49.198 [2024-11-20 07:27:52.302465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.198 [2024-11-20 07:27:52.302505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.198 qpair failed and we were unable to recover it. 00:25:49.198 [2024-11-20 07:27:52.302551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b39f30 (9): Bad file descriptor 00:25:49.198 [2024-11-20 07:27:52.302742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.198 [2024-11-20 07:27:52.302781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.198 qpair failed and we were unable to recover it. 
00:25:49.198 [2024-11-20 07:27:52.302876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.198 [2024-11-20 07:27:52.302905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.198 qpair failed and we were unable to recover it. 00:25:49.198 [2024-11-20 07:27:52.303019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.198 [2024-11-20 07:27:52.303046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.198 qpair failed and we were unable to recover it. 00:25:49.198 [2024-11-20 07:27:52.303133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.198 [2024-11-20 07:27:52.303160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.198 qpair failed and we were unable to recover it. 00:25:49.198 [2024-11-20 07:27:52.303242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.198 [2024-11-20 07:27:52.303269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.198 qpair failed and we were unable to recover it. 00:25:49.198 [2024-11-20 07:27:52.303360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.198 [2024-11-20 07:27:52.303387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.198 qpair failed and we were unable to recover it. 00:25:49.198 [2024-11-20 07:27:52.303482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.198 [2024-11-20 07:27:52.303508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.198 qpair failed and we were unable to recover it. 00:25:49.198 [2024-11-20 07:27:52.303621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.198 [2024-11-20 07:27:52.303647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.198 qpair failed and we were unable to recover it. 00:25:49.198 [2024-11-20 07:27:52.303743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.198 [2024-11-20 07:27:52.303771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.198 qpair failed and we were unable to recover it. 00:25:49.198 [2024-11-20 07:27:52.303846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.198 [2024-11-20 07:27:52.303872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.198 qpair failed and we were unable to recover it. 00:25:49.198 [2024-11-20 07:27:52.303962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.198 [2024-11-20 07:27:52.304001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.198 qpair failed and we were unable to recover it. 
00:25:49.198 [2024-11-20 07:27:52.304101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.198 [2024-11-20 07:27:52.304129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.198 qpair failed and we were unable to recover it. 00:25:49.198 [2024-11-20 07:27:52.304225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.198 [2024-11-20 07:27:52.304253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.198 qpair failed and we were unable to recover it. 00:25:49.198 [2024-11-20 07:27:52.304344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.198 [2024-11-20 07:27:52.304372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.198 qpair failed and we were unable to recover it. 00:25:49.198 [2024-11-20 07:27:52.304452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.198 [2024-11-20 07:27:52.304479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.198 qpair failed and we were unable to recover it. 00:25:49.198 [2024-11-20 07:27:52.304563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.198 [2024-11-20 07:27:52.304588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.198 qpair failed and we were unable to recover it. 00:25:49.198 [2024-11-20 07:27:52.304671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.198 [2024-11-20 07:27:52.304697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.198 qpair failed and we were unable to recover it. 00:25:49.198 [2024-11-20 07:27:52.304789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.198 [2024-11-20 07:27:52.304818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.198 qpair failed and we were unable to recover it. 00:25:49.199 [2024-11-20 07:27:52.304899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.199 [2024-11-20 07:27:52.304925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.199 qpair failed and we were unable to recover it. 00:25:49.199 [2024-11-20 07:27:52.305013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.199 [2024-11-20 07:27:52.305039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.199 qpair failed and we were unable to recover it. 00:25:49.199 [2024-11-20 07:27:52.305152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.199 [2024-11-20 07:27:52.305179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.199 qpair failed and we were unable to recover it. 
00:25:49.199 [2024-11-20 07:27:52.305268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.199 [2024-11-20 07:27:52.305294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.199 qpair failed and we were unable to recover it. 00:25:49.199 [2024-11-20 07:27:52.305393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.199 [2024-11-20 07:27:52.305420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.199 qpair failed and we were unable to recover it. 00:25:49.199 [2024-11-20 07:27:52.305497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.199 [2024-11-20 07:27:52.305523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.199 qpair failed and we were unable to recover it. 00:25:49.199 [2024-11-20 07:27:52.305605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.199 [2024-11-20 07:27:52.305632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.199 qpair failed and we were unable to recover it. 00:25:49.199 [2024-11-20 07:27:52.305730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.199 [2024-11-20 07:27:52.305767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.199 qpair failed and we were unable to recover it. 00:25:49.199 [2024-11-20 07:27:52.305915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.199 [2024-11-20 07:27:52.305944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.199 qpair failed and we were unable to recover it. 00:25:49.199 [2024-11-20 07:27:52.306045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.199 [2024-11-20 07:27:52.306085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.199 qpair failed and we were unable to recover it. 00:25:49.199 [2024-11-20 07:27:52.306178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.199 [2024-11-20 07:27:52.306222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.199 qpair failed and we were unable to recover it. 00:25:49.199 [2024-11-20 07:27:52.306330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.199 [2024-11-20 07:27:52.306373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.199 qpair failed and we were unable to recover it. 00:25:49.199 [2024-11-20 07:27:52.306523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.199 [2024-11-20 07:27:52.306550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.199 qpair failed and we were unable to recover it. 
00:25:49.199 [2024-11-20 07:27:52.306678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.199 [2024-11-20 07:27:52.306707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.199 qpair failed and we were unable to recover it. 00:25:49.199 [2024-11-20 07:27:52.306827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.199 [2024-11-20 07:27:52.306871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.199 qpair failed and we were unable to recover it. 00:25:49.199 [2024-11-20 07:27:52.306968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.199 [2024-11-20 07:27:52.306995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.199 qpair failed and we were unable to recover it. 00:25:49.199 [2024-11-20 07:27:52.307086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.199 [2024-11-20 07:27:52.307114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.199 qpair failed and we were unable to recover it. 00:25:49.199 [2024-11-20 07:27:52.307205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.199 [2024-11-20 07:27:52.307233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.199 qpair failed and we were unable to recover it. 00:25:49.199 [2024-11-20 07:27:52.307378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.199 [2024-11-20 07:27:52.307406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.199 qpair failed and we were unable to recover it. 00:25:49.199 [2024-11-20 07:27:52.307498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.199 [2024-11-20 07:27:52.307525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.199 qpair failed and we were unable to recover it. 00:25:49.199 [2024-11-20 07:27:52.307606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.199 [2024-11-20 07:27:52.307633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.199 qpair failed and we were unable to recover it. 00:25:49.199 [2024-11-20 07:27:52.307748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.199 [2024-11-20 07:27:52.307775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.199 qpair failed and we were unable to recover it. 00:25:49.199 [2024-11-20 07:27:52.307926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.199 [2024-11-20 07:27:52.307953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.199 qpair failed and we were unable to recover it. 
00:25:49.199 [2024-11-20 07:27:52.308063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.199 [2024-11-20 07:27:52.308090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.199 qpair failed and we were unable to recover it. 00:25:49.199 [2024-11-20 07:27:52.308181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.199 [2024-11-20 07:27:52.308209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.199 qpair failed and we were unable to recover it. 00:25:49.199 [2024-11-20 07:27:52.308324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.199 [2024-11-20 07:27:52.308365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.199 qpair failed and we were unable to recover it. 00:25:49.199 [2024-11-20 07:27:52.308573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.199 [2024-11-20 07:27:52.308601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.199 qpair failed and we were unable to recover it. 00:25:49.199 [2024-11-20 07:27:52.308687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.199 [2024-11-20 07:27:52.308715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.199 qpair failed and we were unable to recover it. 00:25:49.199 [2024-11-20 07:27:52.308809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.199 [2024-11-20 07:27:52.308838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.199 qpair failed and we were unable to recover it. 00:25:49.199 [2024-11-20 07:27:52.308954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.199 [2024-11-20 07:27:52.308982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.199 qpair failed and we were unable to recover it. 00:25:49.199 [2024-11-20 07:27:52.309074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.199 [2024-11-20 07:27:52.309101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.199 qpair failed and we were unable to recover it. 00:25:49.199 [2024-11-20 07:27:52.309201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.199 [2024-11-20 07:27:52.309240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.199 qpair failed and we were unable to recover it. 00:25:49.199 [2024-11-20 07:27:52.309333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.199 [2024-11-20 07:27:52.309361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.199 qpair failed and we were unable to recover it. 
00:25:49.199 [2024-11-20 07:27:52.309451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.199 [2024-11-20 07:27:52.309477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.199 qpair failed and we were unable to recover it. 00:25:49.199 [2024-11-20 07:27:52.309592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.199 [2024-11-20 07:27:52.309625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.199 qpair failed and we were unable to recover it. 00:25:49.199 [2024-11-20 07:27:52.309775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.199 [2024-11-20 07:27:52.309802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.199 qpair failed and we were unable to recover it. 00:25:49.199 [2024-11-20 07:27:52.309922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.199 [2024-11-20 07:27:52.309948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.199 qpair failed and we were unable to recover it. 00:25:49.200 [2024-11-20 07:27:52.310026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.200 [2024-11-20 07:27:52.310052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.200 qpair failed and we were unable to recover it. 00:25:49.200 [2024-11-20 07:27:52.310148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.200 [2024-11-20 07:27:52.310188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.200 qpair failed and we were unable to recover it. 00:25:49.200 [2024-11-20 07:27:52.310287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.200 [2024-11-20 07:27:52.310327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.200 qpair failed and we were unable to recover it. 00:25:49.200 [2024-11-20 07:27:52.310445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.200 [2024-11-20 07:27:52.310472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.200 qpair failed and we were unable to recover it. 00:25:49.200 [2024-11-20 07:27:52.310561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.200 [2024-11-20 07:27:52.310587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.200 qpair failed and we were unable to recover it. 00:25:49.200 [2024-11-20 07:27:52.310730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.200 [2024-11-20 07:27:52.310756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.200 qpair failed and we were unable to recover it. 
00:25:49.200 [2024-11-20 07:27:52.310845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.200 [2024-11-20 07:27:52.310871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.200 qpair failed and we were unable to recover it. 00:25:49.200 [2024-11-20 07:27:52.310960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.200 [2024-11-20 07:27:52.310986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.200 qpair failed and we were unable to recover it. 00:25:49.200 [2024-11-20 07:27:52.311091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.200 [2024-11-20 07:27:52.311131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.200 qpair failed and we were unable to recover it. 00:25:49.200 [2024-11-20 07:27:52.311227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.200 [2024-11-20 07:27:52.311255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.200 qpair failed and we were unable to recover it. 00:25:49.200 [2024-11-20 07:27:52.311357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.200 [2024-11-20 07:27:52.311385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.200 qpair failed and we were unable to recover it. 00:25:49.200 [2024-11-20 07:27:52.311510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.200 [2024-11-20 07:27:52.311537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.200 qpair failed and we were unable to recover it. 00:25:49.200 [2024-11-20 07:27:52.311651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.200 [2024-11-20 07:27:52.311678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.200 qpair failed and we were unable to recover it. 00:25:49.200 [2024-11-20 07:27:52.311789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.200 [2024-11-20 07:27:52.311817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.200 qpair failed and we were unable to recover it. 00:25:49.200 [2024-11-20 07:27:52.311909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.200 [2024-11-20 07:27:52.311936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.200 qpair failed and we were unable to recover it. 00:25:49.200 [2024-11-20 07:27:52.312026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.200 [2024-11-20 07:27:52.312052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.200 qpair failed and we were unable to recover it. 
00:25:49.200 [2024-11-20 07:27:52.312160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.200 [2024-11-20 07:27:52.312187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.200 qpair failed and we were unable to recover it. 00:25:49.200 [2024-11-20 07:27:52.312295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.200 [2024-11-20 07:27:52.312327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.200 qpair failed and we were unable to recover it. 00:25:49.200 [2024-11-20 07:27:52.312444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.200 [2024-11-20 07:27:52.312470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.200 qpair failed and we were unable to recover it. 00:25:49.200 [2024-11-20 07:27:52.312582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.200 [2024-11-20 07:27:52.312608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.200 qpair failed and we were unable to recover it. 00:25:49.200 [2024-11-20 07:27:52.312744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.200 [2024-11-20 07:27:52.312771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.200 qpair failed and we were unable to recover it. 00:25:49.200 [2024-11-20 07:27:52.312851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.200 [2024-11-20 07:27:52.312876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.200 qpair failed and we were unable to recover it. 00:25:49.200 [2024-11-20 07:27:52.312963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.200 [2024-11-20 07:27:52.312989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.200 qpair failed and we were unable to recover it. 00:25:49.200 [2024-11-20 07:27:52.313130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.200 [2024-11-20 07:27:52.313158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.200 qpair failed and we were unable to recover it. 00:25:49.200 [2024-11-20 07:27:52.313252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.200 [2024-11-20 07:27:52.313291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.200 qpair failed and we were unable to recover it. 00:25:49.200 [2024-11-20 07:27:52.313407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.200 [2024-11-20 07:27:52.313446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.200 qpair failed and we were unable to recover it. 
00:25:49.200 [2024-11-20 07:27:52.313563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.200 [2024-11-20 07:27:52.313589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.200 qpair failed and we were unable to recover it. 00:25:49.200 [2024-11-20 07:27:52.313705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.200 [2024-11-20 07:27:52.313731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.200 qpair failed and we were unable to recover it. 00:25:49.200 [2024-11-20 07:27:52.313817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.200 [2024-11-20 07:27:52.313843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.200 qpair failed and we were unable to recover it. 00:25:49.200 [2024-11-20 07:27:52.313964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.200 [2024-11-20 07:27:52.313991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.200 qpair failed and we were unable to recover it. 00:25:49.200 [2024-11-20 07:27:52.314085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.200 [2024-11-20 07:27:52.314113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.200 qpair failed and we were unable to recover it. 00:25:49.200 [2024-11-20 07:27:52.314205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.200 [2024-11-20 07:27:52.314232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.200 qpair failed and we were unable to recover it. 00:25:49.200 [2024-11-20 07:27:52.314346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.200 [2024-11-20 07:27:52.314373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.200 qpair failed and we were unable to recover it. 00:25:49.200 [2024-11-20 07:27:52.314455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.200 [2024-11-20 07:27:52.314481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.200 qpair failed and we were unable to recover it. 00:25:49.200 [2024-11-20 07:27:52.314568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.200 [2024-11-20 07:27:52.314595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.200 qpair failed and we were unable to recover it. 00:25:49.200 [2024-11-20 07:27:52.314718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.200 [2024-11-20 07:27:52.314746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.200 qpair failed and we were unable to recover it. 
00:25:49.200 [2024-11-20 07:27:52.314855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.201 [2024-11-20 07:27:52.314883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.201 qpair failed and we were unable to recover it. 00:25:49.201 [2024-11-20 07:27:52.314967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.201 [2024-11-20 07:27:52.314999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.201 qpair failed and we were unable to recover it. 00:25:49.201 [2024-11-20 07:27:52.315085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.201 [2024-11-20 07:27:52.315112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.201 qpair failed and we were unable to recover it. 00:25:49.201 [2024-11-20 07:27:52.315194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.201 [2024-11-20 07:27:52.315220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.201 qpair failed and we were unable to recover it. 00:25:49.201 [2024-11-20 07:27:52.315369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.201 [2024-11-20 07:27:52.315396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.201 qpair failed and we were unable to recover it. 00:25:49.201 [2024-11-20 07:27:52.315477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.201 [2024-11-20 07:27:52.315504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.201 qpair failed and we were unable to recover it. 00:25:49.201 [2024-11-20 07:27:52.315632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.201 [2024-11-20 07:27:52.315658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.201 qpair failed and we were unable to recover it. 00:25:49.201 [2024-11-20 07:27:52.315768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.201 [2024-11-20 07:27:52.315795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.201 qpair failed and we were unable to recover it. 00:25:49.201 [2024-11-20 07:27:52.315896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.201 [2024-11-20 07:27:52.315925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.201 qpair failed and we were unable to recover it. 00:25:49.201 [2024-11-20 07:27:52.316014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.201 [2024-11-20 07:27:52.316040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.201 qpair failed and we were unable to recover it. 
00:25:49.201 [2024-11-20 07:27:52.316171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.201 [2024-11-20 07:27:52.316211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.201 qpair failed and we were unable to recover it. 00:25:49.201 [2024-11-20 07:27:52.316313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.201 [2024-11-20 07:27:52.316343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.201 qpair failed and we were unable to recover it. 00:25:49.201 [2024-11-20 07:27:52.316436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.201 [2024-11-20 07:27:52.316464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.201 qpair failed and we were unable to recover it. 00:25:49.201 [2024-11-20 07:27:52.316557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.201 [2024-11-20 07:27:52.316585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.201 qpair failed and we were unable to recover it. 00:25:49.201 [2024-11-20 07:27:52.316683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.201 [2024-11-20 07:27:52.316711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.201 qpair failed and we were unable to recover it. 00:25:49.201 [2024-11-20 07:27:52.316827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.201 [2024-11-20 07:27:52.316853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.201 qpair failed and we were unable to recover it. 00:25:49.201 [2024-11-20 07:27:52.316952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.201 [2024-11-20 07:27:52.316979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.201 qpair failed and we were unable to recover it. 00:25:49.201 [2024-11-20 07:27:52.317080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.201 [2024-11-20 07:27:52.317110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.201 qpair failed and we were unable to recover it. 00:25:49.201 [2024-11-20 07:27:52.317271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.201 [2024-11-20 07:27:52.317297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.201 qpair failed and we were unable to recover it. 00:25:49.201 [2024-11-20 07:27:52.317388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.201 [2024-11-20 07:27:52.317415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.201 qpair failed and we were unable to recover it. 
00:25:49.201 [2024-11-20 07:27:52.317500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.201 [2024-11-20 07:27:52.317528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.201 qpair failed and we were unable to recover it. 00:25:49.201 [2024-11-20 07:27:52.317613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.201 [2024-11-20 07:27:52.317640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.201 qpair failed and we were unable to recover it. 00:25:49.201 [2024-11-20 07:27:52.317728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.201 [2024-11-20 07:27:52.317755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.201 qpair failed and we were unable to recover it. 00:25:49.201 [2024-11-20 07:27:52.317860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.201 [2024-11-20 07:27:52.317887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.201 qpair failed and we were unable to recover it. 00:25:49.201 [2024-11-20 07:27:52.317970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.201 [2024-11-20 07:27:52.317996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.201 qpair failed and we were unable to recover it. 00:25:49.201 [2024-11-20 07:27:52.318090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.201 [2024-11-20 07:27:52.318117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.201 qpair failed and we were unable to recover it. 00:25:49.201 [2024-11-20 07:27:52.318218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.201 [2024-11-20 07:27:52.318244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.201 qpair failed and we were unable to recover it. 00:25:49.201 [2024-11-20 07:27:52.318335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.201 [2024-11-20 07:27:52.318364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.201 qpair failed and we were unable to recover it. 00:25:49.201 [2024-11-20 07:27:52.318456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.201 [2024-11-20 07:27:52.318495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.201 qpair failed and we were unable to recover it. 00:25:49.201 [2024-11-20 07:27:52.318593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.201 [2024-11-20 07:27:52.318621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.201 qpair failed and we were unable to recover it. 
00:25:49.201 [2024-11-20 07:27:52.318709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.201 [2024-11-20 07:27:52.318736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.201 qpair failed and we were unable to recover it. 00:25:49.201 [2024-11-20 07:27:52.318859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.201 [2024-11-20 07:27:52.318885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.201 qpair failed and we were unable to recover it. 00:25:49.201 [2024-11-20 07:27:52.318975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.201 [2024-11-20 07:27:52.319006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.201 qpair failed and we were unable to recover it. 00:25:49.201 [2024-11-20 07:27:52.319108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.201 [2024-11-20 07:27:52.319147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.201 qpair failed and we were unable to recover it. 00:25:49.201 [2024-11-20 07:27:52.319272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.201 [2024-11-20 07:27:52.319300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.201 qpair failed and we were unable to recover it. 00:25:49.201 [2024-11-20 07:27:52.319395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.201 [2024-11-20 07:27:52.319422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.201 qpair failed and we were unable to recover it. 00:25:49.201 [2024-11-20 07:27:52.319530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.201 [2024-11-20 07:27:52.319556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.202 qpair failed and we were unable to recover it. 00:25:49.202 [2024-11-20 07:27:52.319666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.202 [2024-11-20 07:27:52.319693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.202 qpair failed and we were unable to recover it. 00:25:49.202 [2024-11-20 07:27:52.319837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.202 [2024-11-20 07:27:52.319864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.202 qpair failed and we were unable to recover it. 00:25:49.202 [2024-11-20 07:27:52.319951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.202 [2024-11-20 07:27:52.319980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.202 qpair failed and we were unable to recover it. 
00:25:49.202 [2024-11-20 07:27:52.320069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.202 [2024-11-20 07:27:52.320096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.202 qpair failed and we were unable to recover it. 00:25:49.202 [2024-11-20 07:27:52.320196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.202 [2024-11-20 07:27:52.320224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.202 qpair failed and we were unable to recover it. 00:25:49.202 [2024-11-20 07:27:52.320319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.202 [2024-11-20 07:27:52.320346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.202 qpair failed and we were unable to recover it. 00:25:49.202 [2024-11-20 07:27:52.320433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.202 [2024-11-20 07:27:52.320460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.202 qpair failed and we were unable to recover it. 00:25:49.202 [2024-11-20 07:27:52.320551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.202 [2024-11-20 07:27:52.320578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.202 qpair failed and we were unable to recover it. 00:25:49.202 [2024-11-20 07:27:52.320693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.202 [2024-11-20 07:27:52.320720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.202 qpair failed and we were unable to recover it. 00:25:49.202 [2024-11-20 07:27:52.320816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.202 [2024-11-20 07:27:52.320843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.202 qpair failed and we were unable to recover it. 00:25:49.202 [2024-11-20 07:27:52.320926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.202 [2024-11-20 07:27:52.320952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.202 qpair failed and we were unable to recover it. 00:25:49.202 [2024-11-20 07:27:52.321068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.202 [2024-11-20 07:27:52.321095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.202 qpair failed and we were unable to recover it. 00:25:49.202 [2024-11-20 07:27:52.321176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.202 [2024-11-20 07:27:52.321202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.202 qpair failed and we were unable to recover it. 
00:25:49.202 [2024-11-20 07:27:52.321296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.202 [2024-11-20 07:27:52.321331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.202 qpair failed and we were unable to recover it. 00:25:49.202 [2024-11-20 07:27:52.321415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.202 [2024-11-20 07:27:52.321442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.202 qpair failed and we were unable to recover it. 00:25:49.202 [2024-11-20 07:27:52.321529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.202 [2024-11-20 07:27:52.321556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.202 qpair failed and we were unable to recover it. 00:25:49.202 [2024-11-20 07:27:52.321635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.202 [2024-11-20 07:27:52.321662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.202 qpair failed and we were unable to recover it. 00:25:49.202 [2024-11-20 07:27:52.321801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.202 [2024-11-20 07:27:52.321828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.202 qpair failed and we were unable to recover it. 00:25:49.202 [2024-11-20 07:27:52.321950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.202 [2024-11-20 07:27:52.321976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.202 qpair failed and we were unable to recover it. 00:25:49.202 [2024-11-20 07:27:52.322081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.202 [2024-11-20 07:27:52.322120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.202 qpair failed and we were unable to recover it. 00:25:49.202 [2024-11-20 07:27:52.322215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.202 [2024-11-20 07:27:52.322241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.202 qpair failed and we were unable to recover it. 00:25:49.202 [2024-11-20 07:27:52.322340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.202 [2024-11-20 07:27:52.322367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.202 qpair failed and we were unable to recover it. 00:25:49.202 [2024-11-20 07:27:52.322454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.202 [2024-11-20 07:27:52.322480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.202 qpair failed and we were unable to recover it. 
00:25:49.202 [2024-11-20 07:27:52.322571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.202 [2024-11-20 07:27:52.322599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.202 qpair failed and we were unable to recover it. 00:25:49.202 [2024-11-20 07:27:52.322749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.202 [2024-11-20 07:27:52.322775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.202 qpair failed and we were unable to recover it. 00:25:49.202 [2024-11-20 07:27:52.322861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.202 [2024-11-20 07:27:52.322888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.202 qpair failed and we were unable to recover it. 00:25:49.202 [2024-11-20 07:27:52.322971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.202 [2024-11-20 07:27:52.322997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.202 qpair failed and we were unable to recover it. 00:25:49.202 [2024-11-20 07:27:52.323113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.202 [2024-11-20 07:27:52.323139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.202 qpair failed and we were unable to recover it. 00:25:49.202 [2024-11-20 07:27:52.323225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.202 [2024-11-20 07:27:52.323251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.202 qpair failed and we were unable to recover it. 00:25:49.202 [2024-11-20 07:27:52.323361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.202 [2024-11-20 07:27:52.323391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.202 qpair failed and we were unable to recover it. 00:25:49.202 [2024-11-20 07:27:52.323499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.202 [2024-11-20 07:27:52.323528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.202 qpair failed and we were unable to recover it. 00:25:49.202 [2024-11-20 07:27:52.323671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.202 [2024-11-20 07:27:52.323717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.202 qpair failed and we were unable to recover it. 00:25:49.202 [2024-11-20 07:27:52.323829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.202 [2024-11-20 07:27:52.323857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.202 qpair failed and we were unable to recover it. 
00:25:49.202 [2024-11-20 07:27:52.323942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.202 [2024-11-20 07:27:52.323970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.202 qpair failed and we were unable to recover it. 00:25:49.202 [2024-11-20 07:27:52.324078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.202 [2024-11-20 07:27:52.324104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.202 qpair failed and we were unable to recover it. 00:25:49.202 [2024-11-20 07:27:52.324191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.202 [2024-11-20 07:27:52.324219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.202 qpair failed and we were unable to recover it. 00:25:49.203 [2024-11-20 07:27:52.324324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.203 [2024-11-20 07:27:52.324367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.203 qpair failed and we were unable to recover it. 00:25:49.203 [2024-11-20 07:27:52.324489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.203 [2024-11-20 07:27:52.324518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.203 qpair failed and we were unable to recover it. 00:25:49.203 [2024-11-20 07:27:52.324638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.203 [2024-11-20 07:27:52.324664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.203 qpair failed and we were unable to recover it. 00:25:49.203 [2024-11-20 07:27:52.324758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.203 [2024-11-20 07:27:52.324784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.203 qpair failed and we were unable to recover it. 00:25:49.203 [2024-11-20 07:27:52.324890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.203 [2024-11-20 07:27:52.324916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.203 qpair failed and we were unable to recover it. 00:25:49.203 [2024-11-20 07:27:52.325009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.203 [2024-11-20 07:27:52.325035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.203 qpair failed and we were unable to recover it. 00:25:49.203 [2024-11-20 07:27:52.325121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.203 [2024-11-20 07:27:52.325148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.203 qpair failed and we were unable to recover it. 
00:25:49.203 [2024-11-20 07:27:52.325263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.203 [2024-11-20 07:27:52.325293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.203 qpair failed and we were unable to recover it. 00:25:49.203 [2024-11-20 07:27:52.325396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.203 [2024-11-20 07:27:52.325423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.203 qpair failed and we were unable to recover it. 00:25:49.203 [2024-11-20 07:27:52.325512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.203 [2024-11-20 07:27:52.325539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.203 qpair failed and we were unable to recover it. 00:25:49.203 [2024-11-20 07:27:52.325658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.203 [2024-11-20 07:27:52.325684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.203 qpair failed and we were unable to recover it. 00:25:49.203 [2024-11-20 07:27:52.325806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.203 [2024-11-20 07:27:52.325832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.203 qpair failed and we were unable to recover it. 00:25:49.203 [2024-11-20 07:27:52.325924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.203 [2024-11-20 07:27:52.325950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.203 qpair failed and we were unable to recover it. 00:25:49.203 [2024-11-20 07:27:52.326035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.203 [2024-11-20 07:27:52.326062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.203 qpair failed and we were unable to recover it. 00:25:49.203 [2024-11-20 07:27:52.326170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.203 [2024-11-20 07:27:52.326196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.203 qpair failed and we were unable to recover it. 00:25:49.203 [2024-11-20 07:27:52.326278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.203 [2024-11-20 07:27:52.326311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.203 qpair failed and we were unable to recover it. 00:25:49.203 [2024-11-20 07:27:52.326400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.203 [2024-11-20 07:27:52.326427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.203 qpair failed and we were unable to recover it. 
00:25:49.203 [2024-11-20 07:27:52.326519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.203 [2024-11-20 07:27:52.326546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.203 qpair failed and we were unable to recover it. 00:25:49.203 [2024-11-20 07:27:52.326702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.203 [2024-11-20 07:27:52.326728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.203 qpair failed and we were unable to recover it. 00:25:49.203 [2024-11-20 07:27:52.326842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.203 [2024-11-20 07:27:52.326868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.203 qpair failed and we were unable to recover it. 00:25:49.203 [2024-11-20 07:27:52.326989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.203 [2024-11-20 07:27:52.327016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.203 qpair failed and we were unable to recover it. 00:25:49.203 [2024-11-20 07:27:52.327122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.203 [2024-11-20 07:27:52.327161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.203 qpair failed and we were unable to recover it. 00:25:49.203 [2024-11-20 07:27:52.327317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.203 [2024-11-20 07:27:52.327357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.203 qpair failed and we were unable to recover it. 00:25:49.203 [2024-11-20 07:27:52.327450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.203 [2024-11-20 07:27:52.327478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.203 qpair failed and we were unable to recover it. 00:25:49.203 [2024-11-20 07:27:52.327590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.203 [2024-11-20 07:27:52.327617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.203 qpair failed and we were unable to recover it. 00:25:49.203 [2024-11-20 07:27:52.327730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.203 [2024-11-20 07:27:52.327756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.203 qpair failed and we were unable to recover it. 00:25:49.203 [2024-11-20 07:27:52.327839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.203 [2024-11-20 07:27:52.327865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.203 qpair failed and we were unable to recover it. 
00:25:49.203 [2024-11-20 07:27:52.327952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.203 [2024-11-20 07:27:52.327979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.203 qpair failed and we were unable to recover it. 00:25:49.203 [2024-11-20 07:27:52.328103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.203 [2024-11-20 07:27:52.328142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.203 qpair failed and we were unable to recover it. 00:25:49.203 [2024-11-20 07:27:52.328250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.203 [2024-11-20 07:27:52.328290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.203 qpair failed and we were unable to recover it. 00:25:49.203 [2024-11-20 07:27:52.328398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.203 [2024-11-20 07:27:52.328427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.203 qpair failed and we were unable to recover it. 00:25:49.203 [2024-11-20 07:27:52.328522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.203 [2024-11-20 07:27:52.328550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.204 qpair failed and we were unable to recover it. 00:25:49.204 [2024-11-20 07:27:52.328686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.204 [2024-11-20 07:27:52.328712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.204 qpair failed and we were unable to recover it. 00:25:49.204 [2024-11-20 07:27:52.328830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.204 [2024-11-20 07:27:52.328858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.204 qpair failed and we were unable to recover it. 00:25:49.204 [2024-11-20 07:27:52.328944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.204 [2024-11-20 07:27:52.328972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.204 qpair failed and we were unable to recover it. 00:25:49.204 [2024-11-20 07:27:52.329088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.204 [2024-11-20 07:27:52.329115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.204 qpair failed and we were unable to recover it. 00:25:49.204 [2024-11-20 07:27:52.329206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.204 [2024-11-20 07:27:52.329234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.204 qpair failed and we were unable to recover it. 
00:25:49.204 [2024-11-20 07:27:52.329318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.204 [2024-11-20 07:27:52.329346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.204 qpair failed and we were unable to recover it. 00:25:49.204 [2024-11-20 07:27:52.329438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.204 [2024-11-20 07:27:52.329477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.204 qpair failed and we were unable to recover it. 00:25:49.204 [2024-11-20 07:27:52.329607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.204 [2024-11-20 07:27:52.329647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.204 qpair failed and we were unable to recover it. 00:25:49.204 [2024-11-20 07:27:52.329749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.204 [2024-11-20 07:27:52.329777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.204 qpair failed and we were unable to recover it. 00:25:49.204 [2024-11-20 07:27:52.329889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.204 [2024-11-20 07:27:52.329917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.204 qpair failed and we were unable to recover it. 00:25:49.204 [2024-11-20 07:27:52.330005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.204 [2024-11-20 07:27:52.330031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.204 qpair failed and we were unable to recover it. 00:25:49.204 [2024-11-20 07:27:52.330122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.204 [2024-11-20 07:27:52.330149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.204 qpair failed and we were unable to recover it. 00:25:49.204 [2024-11-20 07:27:52.330235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.204 [2024-11-20 07:27:52.330262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.204 qpair failed and we were unable to recover it. 00:25:49.204 [2024-11-20 07:27:52.330397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.204 [2024-11-20 07:27:52.330427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.204 qpair failed and we were unable to recover it. 00:25:49.204 [2024-11-20 07:27:52.330514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.204 [2024-11-20 07:27:52.330541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.204 qpair failed and we were unable to recover it. 
00:25:49.204 [2024-11-20 07:27:52.330658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.204 [2024-11-20 07:27:52.330684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.204 qpair failed and we were unable to recover it. 00:25:49.204 [2024-11-20 07:27:52.330775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.204 [2024-11-20 07:27:52.330802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.204 qpair failed and we were unable to recover it. 00:25:49.204 [2024-11-20 07:27:52.330921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.204 [2024-11-20 07:27:52.330949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.204 qpair failed and we were unable to recover it. 00:25:49.204 [2024-11-20 07:27:52.331034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.204 [2024-11-20 07:27:52.331062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.204 qpair failed and we were unable to recover it. 00:25:49.204 [2024-11-20 07:27:52.331182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.204 [2024-11-20 07:27:52.331209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.204 qpair failed and we were unable to recover it. 00:25:49.204 [2024-11-20 07:27:52.331327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.204 [2024-11-20 07:27:52.331355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.204 qpair failed and we were unable to recover it. 00:25:49.204 [2024-11-20 07:27:52.331450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.204 [2024-11-20 07:27:52.331478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.204 qpair failed and we were unable to recover it. 00:25:49.204 [2024-11-20 07:27:52.331571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.204 [2024-11-20 07:27:52.331598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.204 qpair failed and we were unable to recover it. 00:25:49.204 [2024-11-20 07:27:52.331697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.204 [2024-11-20 07:27:52.331725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.204 qpair failed and we were unable to recover it. 00:25:49.204 [2024-11-20 07:27:52.331841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.204 [2024-11-20 07:27:52.331868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.204 qpair failed and we were unable to recover it. 
00:25:49.204 [2024-11-20 07:27:52.331963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.204 [2024-11-20 07:27:52.331992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.204 qpair failed and we were unable to recover it. 00:25:49.204 [2024-11-20 07:27:52.332076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.204 [2024-11-20 07:27:52.332104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.204 qpair failed and we were unable to recover it. 00:25:49.204 [2024-11-20 07:27:52.332221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.204 [2024-11-20 07:27:52.332250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.204 qpair failed and we were unable to recover it. 00:25:49.204 [2024-11-20 07:27:52.332344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.204 [2024-11-20 07:27:52.332371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.204 qpair failed and we were unable to recover it. 00:25:49.204 [2024-11-20 07:27:52.332450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.204 [2024-11-20 07:27:52.332476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.204 qpair failed and we were unable to recover it. 00:25:49.204 [2024-11-20 07:27:52.332586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.204 [2024-11-20 07:27:52.332617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.204 qpair failed and we were unable to recover it. 00:25:49.204 [2024-11-20 07:27:52.332739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.204 [2024-11-20 07:27:52.332767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.204 qpair failed and we were unable to recover it. 00:25:49.204 [2024-11-20 07:27:52.332851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.204 [2024-11-20 07:27:52.332877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.204 qpair failed and we were unable to recover it. 00:25:49.204 [2024-11-20 07:27:52.332964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.204 [2024-11-20 07:27:52.332990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.204 qpair failed and we were unable to recover it. 00:25:49.204 [2024-11-20 07:27:52.333073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.204 [2024-11-20 07:27:52.333100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.204 qpair failed and we were unable to recover it. 
00:25:49.204 [2024-11-20 07:27:52.333224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.204 [2024-11-20 07:27:52.333251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.204 qpair failed and we were unable to recover it. 00:25:49.204 [2024-11-20 07:27:52.333367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.205 [2024-11-20 07:27:52.333396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.205 qpair failed and we were unable to recover it. 00:25:49.205 [2024-11-20 07:27:52.333483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.205 [2024-11-20 07:27:52.333511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.205 qpair failed and we were unable to recover it. 00:25:49.205 [2024-11-20 07:27:52.333617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.205 [2024-11-20 07:27:52.333656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.205 qpair failed and we were unable to recover it. 00:25:49.205 [2024-11-20 07:27:52.333741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.205 [2024-11-20 07:27:52.333768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.205 qpair failed and we were unable to recover it. 00:25:49.205 [2024-11-20 07:27:52.333858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.205 [2024-11-20 07:27:52.333884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.205 qpair failed and we were unable to recover it. 00:25:49.205 [2024-11-20 07:27:52.333973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.205 [2024-11-20 07:27:52.333999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.205 qpair failed and we were unable to recover it. 00:25:49.205 [2024-11-20 07:27:52.334086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.205 [2024-11-20 07:27:52.334112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.205 qpair failed and we were unable to recover it. 00:25:49.205 [2024-11-20 07:27:52.334206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.205 [2024-11-20 07:27:52.334234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.205 qpair failed and we were unable to recover it. 00:25:49.205 [2024-11-20 07:27:52.334356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.205 [2024-11-20 07:27:52.334383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.205 qpair failed and we were unable to recover it. 
00:25:49.205 [2024-11-20 07:27:52.334490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.205 [2024-11-20 07:27:52.334517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.205 qpair failed and we were unable to recover it. 00:25:49.205 [2024-11-20 07:27:52.334606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.205 [2024-11-20 07:27:52.334633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.205 qpair failed and we were unable to recover it. 00:25:49.205 [2024-11-20 07:27:52.334751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.205 [2024-11-20 07:27:52.334779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.205 qpair failed and we were unable to recover it. 00:25:49.205 [2024-11-20 07:27:52.334877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.205 [2024-11-20 07:27:52.334905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.205 qpair failed and we were unable to recover it. 00:25:49.205 [2024-11-20 07:27:52.334991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.205 [2024-11-20 07:27:52.335018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.205 qpair failed and we were unable to recover it. 00:25:49.205 [2024-11-20 07:27:52.335122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.205 [2024-11-20 07:27:52.335161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.205 qpair failed and we were unable to recover it. 00:25:49.205 [2024-11-20 07:27:52.335262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.205 [2024-11-20 07:27:52.335291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.205 qpair failed and we were unable to recover it. 00:25:49.205 [2024-11-20 07:27:52.335389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.205 [2024-11-20 07:27:52.335416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.205 qpair failed and we were unable to recover it. 00:25:49.205 [2024-11-20 07:27:52.335530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.205 [2024-11-20 07:27:52.335556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.205 qpair failed and we were unable to recover it. 00:25:49.205 [2024-11-20 07:27:52.335642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.205 [2024-11-20 07:27:52.335670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.205 qpair failed and we were unable to recover it. 
00:25:49.205 [2024-11-20 07:27:52.335786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.205 [2024-11-20 07:27:52.335813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.205 qpair failed and we were unable to recover it. 00:25:49.205 [2024-11-20 07:27:52.335930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.205 [2024-11-20 07:27:52.335957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.205 qpair failed and we were unable to recover it. 00:25:49.205 [2024-11-20 07:27:52.336079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.205 [2024-11-20 07:27:52.336118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.205 qpair failed and we were unable to recover it. 00:25:49.205 [2024-11-20 07:27:52.336250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.205 [2024-11-20 07:27:52.336291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.205 qpair failed and we were unable to recover it. 00:25:49.205 [2024-11-20 07:27:52.336401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.205 [2024-11-20 07:27:52.336430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.205 qpair failed and we were unable to recover it. 00:25:49.205 [2024-11-20 07:27:52.336546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.205 [2024-11-20 07:27:52.336573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.205 qpair failed and we were unable to recover it. 00:25:49.205 [2024-11-20 07:27:52.336691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.205 [2024-11-20 07:27:52.336718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.205 qpair failed and we were unable to recover it. 00:25:49.205 [2024-11-20 07:27:52.336813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.205 [2024-11-20 07:27:52.336840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.205 qpair failed and we were unable to recover it. 00:25:49.205 [2024-11-20 07:27:52.336921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.205 [2024-11-20 07:27:52.336947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.205 qpair failed and we were unable to recover it. 00:25:49.205 [2024-11-20 07:27:52.337032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.205 [2024-11-20 07:27:52.337061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.205 qpair failed and we were unable to recover it. 
00:25:49.205 [2024-11-20 07:27:52.337146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.205 [2024-11-20 07:27:52.337174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.205 qpair failed and we were unable to recover it. 00:25:49.205 [2024-11-20 07:27:52.337280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.205 [2024-11-20 07:27:52.337312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.205 qpair failed and we were unable to recover it. 00:25:49.205 [2024-11-20 07:27:52.337402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.205 [2024-11-20 07:27:52.337428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.205 qpair failed and we were unable to recover it. 00:25:49.205 [2024-11-20 07:27:52.337521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.205 [2024-11-20 07:27:52.337548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.205 qpair failed and we were unable to recover it. 00:25:49.205 [2024-11-20 07:27:52.337694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.205 [2024-11-20 07:27:52.337723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.205 qpair failed and we were unable to recover it. 00:25:49.205 [2024-11-20 07:27:52.337815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.205 [2024-11-20 07:27:52.337846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.205 qpair failed and we were unable to recover it. 00:25:49.205 [2024-11-20 07:27:52.337960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.205 [2024-11-20 07:27:52.337987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.205 qpair failed and we were unable to recover it. 00:25:49.205 [2024-11-20 07:27:52.338076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.205 [2024-11-20 07:27:52.338103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.205 qpair failed and we were unable to recover it. 00:25:49.206 [2024-11-20 07:27:52.338222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.206 [2024-11-20 07:27:52.338248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.206 qpair failed and we were unable to recover it. 00:25:49.206 [2024-11-20 07:27:52.338352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.206 [2024-11-20 07:27:52.338378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.206 qpair failed and we were unable to recover it. 
00:25:49.206 [2024-11-20 07:27:52.338489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.206 [2024-11-20 07:27:52.338515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.206 qpair failed and we were unable to recover it. 00:25:49.206 [2024-11-20 07:27:52.338620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.206 [2024-11-20 07:27:52.338646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.206 qpair failed and we were unable to recover it. 00:25:49.206 [2024-11-20 07:27:52.338777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.206 [2024-11-20 07:27:52.338803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.206 qpair failed and we were unable to recover it. 00:25:49.206 [2024-11-20 07:27:52.338901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.206 [2024-11-20 07:27:52.338929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.206 qpair failed and we were unable to recover it. 00:25:49.206 [2024-11-20 07:27:52.339012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.206 [2024-11-20 07:27:52.339038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.206 qpair failed and we were unable to recover it. 00:25:49.206 [2024-11-20 07:27:52.339118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.206 [2024-11-20 07:27:52.339145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.206 qpair failed and we were unable to recover it. 00:25:49.206 [2024-11-20 07:27:52.339263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.206 [2024-11-20 07:27:52.339289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.206 qpair failed and we were unable to recover it. 00:25:49.206 [2024-11-20 07:27:52.339402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.206 [2024-11-20 07:27:52.339429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.206 qpair failed and we were unable to recover it. 00:25:49.206 [2024-11-20 07:27:52.339541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.206 [2024-11-20 07:27:52.339568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.206 qpair failed and we were unable to recover it. 00:25:49.206 [2024-11-20 07:27:52.339734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.206 [2024-11-20 07:27:52.339760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.206 qpair failed and we were unable to recover it. 
00:25:49.206 [2024-11-20 07:27:52.339851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.206 [2024-11-20 07:27:52.339878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.206 qpair failed and we were unable to recover it. 00:25:49.206 [2024-11-20 07:27:52.339971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.206 [2024-11-20 07:27:52.339997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.206 qpair failed and we were unable to recover it. 00:25:49.206 [2024-11-20 07:27:52.340116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.206 [2024-11-20 07:27:52.340143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.206 qpair failed and we were unable to recover it. 00:25:49.206 [2024-11-20 07:27:52.340253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.206 [2024-11-20 07:27:52.340279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.206 qpair failed and we were unable to recover it. 00:25:49.206 [2024-11-20 07:27:52.340386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.206 [2024-11-20 07:27:52.340426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.206 qpair failed and we were unable to recover it. 00:25:49.206 [2024-11-20 07:27:52.340558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.206 [2024-11-20 07:27:52.340597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.206 qpair failed and we were unable to recover it. 00:25:49.206 [2024-11-20 07:27:52.340732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.206 [2024-11-20 07:27:52.340760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.206 qpair failed and we were unable to recover it. 00:25:49.206 [2024-11-20 07:27:52.340846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.206 [2024-11-20 07:27:52.340874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.206 qpair failed and we were unable to recover it. 00:25:49.206 [2024-11-20 07:27:52.340960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.206 [2024-11-20 07:27:52.340987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.206 qpair failed and we were unable to recover it. 00:25:49.206 [2024-11-20 07:27:52.341092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.206 [2024-11-20 07:27:52.341119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.206 qpair failed and we were unable to recover it. 
00:25:49.206 [2024-11-20 07:27:52.341202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.206 [2024-11-20 07:27:52.341229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.206 qpair failed and we were unable to recover it. 00:25:49.206 [2024-11-20 07:27:52.341363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.206 [2024-11-20 07:27:52.341391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.206 qpair failed and we were unable to recover it. 00:25:49.206 [2024-11-20 07:27:52.341499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.206 [2024-11-20 07:27:52.341539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.206 qpair failed and we were unable to recover it. 00:25:49.206 [2024-11-20 07:27:52.341666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.206 [2024-11-20 07:27:52.341695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.206 qpair failed and we were unable to recover it. 00:25:49.206 [2024-11-20 07:27:52.341794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.206 [2024-11-20 07:27:52.341821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.206 qpair failed and we were unable to recover it. 00:25:49.206 [2024-11-20 07:27:52.341911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.206 [2024-11-20 07:27:52.341938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.206 qpair failed and we were unable to recover it. 00:25:49.206 [2024-11-20 07:27:52.342027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.206 [2024-11-20 07:27:52.342055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.206 qpair failed and we were unable to recover it. 00:25:49.206 [2024-11-20 07:27:52.342139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.206 [2024-11-20 07:27:52.342165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.206 qpair failed and we were unable to recover it. 00:25:49.206 [2024-11-20 07:27:52.342259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.206 [2024-11-20 07:27:52.342287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.206 qpair failed and we were unable to recover it. 00:25:49.206 [2024-11-20 07:27:52.342388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.206 [2024-11-20 07:27:52.342414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.206 qpair failed and we were unable to recover it. 
00:25:49.206 [2024-11-20 07:27:52.342529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.206 [2024-11-20 07:27:52.342556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.206 qpair failed and we were unable to recover it. 00:25:49.206 [2024-11-20 07:27:52.342669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.206 [2024-11-20 07:27:52.342696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.206 qpair failed and we were unable to recover it. 00:25:49.206 [2024-11-20 07:27:52.342820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.206 [2024-11-20 07:27:52.342849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.206 qpair failed and we were unable to recover it. 00:25:49.206 [2024-11-20 07:27:52.342965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.206 [2024-11-20 07:27:52.342992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.206 qpair failed and we were unable to recover it. 00:25:49.206 [2024-11-20 07:27:52.343080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.206 [2024-11-20 07:27:52.343108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.206 qpair failed and we were unable to recover it. 00:25:49.206 [2024-11-20 07:27:52.343196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.207 [2024-11-20 07:27:52.343228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.207 qpair failed and we were unable to recover it. 00:25:49.207 [2024-11-20 07:27:52.343347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.207 [2024-11-20 07:27:52.343386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.207 qpair failed and we were unable to recover it. 00:25:49.207 [2024-11-20 07:27:52.343506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.207 [2024-11-20 07:27:52.343533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.207 qpair failed and we were unable to recover it. 00:25:49.207 [2024-11-20 07:27:52.343663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.207 [2024-11-20 07:27:52.343689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.207 qpair failed and we were unable to recover it. 00:25:49.207 [2024-11-20 07:27:52.343777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.207 [2024-11-20 07:27:52.343803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.207 qpair failed and we were unable to recover it. 
00:25:49.207 [2024-11-20 07:27:52.343888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.207 [2024-11-20 07:27:52.343914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.207 qpair failed and we were unable to recover it. 00:25:49.207 [2024-11-20 07:27:52.344038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.207 [2024-11-20 07:27:52.344064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.207 qpair failed and we were unable to recover it. 00:25:49.207 [2024-11-20 07:27:52.344187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.207 [2024-11-20 07:27:52.344227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.207 qpair failed and we were unable to recover it. 00:25:49.207 [2024-11-20 07:27:52.344346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.207 [2024-11-20 07:27:52.344376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.207 qpair failed and we were unable to recover it. 00:25:49.207 [2024-11-20 07:27:52.344469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.207 [2024-11-20 07:27:52.344496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.207 qpair failed and we were unable to recover it. 00:25:49.207 [2024-11-20 07:27:52.344619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.207 [2024-11-20 07:27:52.344646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.207 qpair failed and we were unable to recover it. 00:25:49.207 [2024-11-20 07:27:52.344733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.207 [2024-11-20 07:27:52.344760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.207 qpair failed and we were unable to recover it. 00:25:49.207 [2024-11-20 07:27:52.344848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.207 [2024-11-20 07:27:52.344875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.207 qpair failed and we were unable to recover it. 00:25:49.207 [2024-11-20 07:27:52.344966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.207 [2024-11-20 07:27:52.344993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.207 qpair failed and we were unable to recover it. 00:25:49.207 [2024-11-20 07:27:52.345084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.207 [2024-11-20 07:27:52.345110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.207 qpair failed and we were unable to recover it. 
00:25:49.207 [2024-11-20 07:27:52.345197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.207 [2024-11-20 07:27:52.345226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.207 qpair failed and we were unable to recover it. 00:25:49.207 [2024-11-20 07:27:52.345318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.207 [2024-11-20 07:27:52.345346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.207 qpair failed and we were unable to recover it. 00:25:49.207 [2024-11-20 07:27:52.345462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.207 [2024-11-20 07:27:52.345488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.207 qpair failed and we were unable to recover it. 00:25:49.207 [2024-11-20 07:27:52.345569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.207 [2024-11-20 07:27:52.345596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.207 qpair failed and we were unable to recover it. 00:25:49.207 [2024-11-20 07:27:52.345709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.207 [2024-11-20 07:27:52.345738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.207 qpair failed and we were unable to recover it. 00:25:49.207 [2024-11-20 07:27:52.345934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.207 [2024-11-20 07:27:52.345961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.207 qpair failed and we were unable to recover it. 00:25:49.207 [2024-11-20 07:27:52.346082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.207 [2024-11-20 07:27:52.346109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.207 qpair failed and we were unable to recover it. 00:25:49.207 [2024-11-20 07:27:52.346196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.207 [2024-11-20 07:27:52.346223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.207 qpair failed and we were unable to recover it. 00:25:49.207 [2024-11-20 07:27:52.346326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.207 [2024-11-20 07:27:52.346355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.207 qpair failed and we were unable to recover it. 00:25:49.207 [2024-11-20 07:27:52.346462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.207 [2024-11-20 07:27:52.346489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.207 qpair failed and we were unable to recover it. 
00:25:49.207 [2024-11-20 07:27:52.346578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.207 [2024-11-20 07:27:52.346604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.207 qpair failed and we were unable to recover it. 00:25:49.207 [2024-11-20 07:27:52.346724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.207 [2024-11-20 07:27:52.346751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.207 qpair failed and we were unable to recover it. 00:25:49.207 [2024-11-20 07:27:52.346828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.207 [2024-11-20 07:27:52.346861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.207 qpair failed and we were unable to recover it. 00:25:49.207 [2024-11-20 07:27:52.346947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.207 [2024-11-20 07:27:52.346974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.207 qpair failed and we were unable to recover it. 00:25:49.207 [2024-11-20 07:27:52.347078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.207 [2024-11-20 07:27:52.347117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.207 qpair failed and we were unable to recover it. 00:25:49.207 [2024-11-20 07:27:52.347236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.207 [2024-11-20 07:27:52.347264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.207 qpair failed and we were unable to recover it. 00:25:49.207 [2024-11-20 07:27:52.347369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.207 [2024-11-20 07:27:52.347399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.207 qpair failed and we were unable to recover it. 00:25:49.207 [2024-11-20 07:27:52.347477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.207 [2024-11-20 07:27:52.347503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.207 qpair failed and we were unable to recover it. 00:25:49.207 [2024-11-20 07:27:52.347582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.207 [2024-11-20 07:27:52.347609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.207 qpair failed and we were unable to recover it. 00:25:49.207 [2024-11-20 07:27:52.347713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.207 [2024-11-20 07:27:52.347740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.208 qpair failed and we were unable to recover it. 
00:25:49.208 [2024-11-20 07:27:52.347854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.208 [2024-11-20 07:27:52.347880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.208 qpair failed and we were unable to recover it. 00:25:49.208 [2024-11-20 07:27:52.347994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.208 [2024-11-20 07:27:52.348020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.208 qpair failed and we were unable to recover it. 00:25:49.208 [2024-11-20 07:27:52.348111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.208 [2024-11-20 07:27:52.348137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.208 qpair failed and we were unable to recover it. 00:25:49.208 [2024-11-20 07:27:52.348219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.208 [2024-11-20 07:27:52.348245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.208 qpair failed and we were unable to recover it. 00:25:49.208 [2024-11-20 07:27:52.348360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.208 [2024-11-20 07:27:52.348387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.208 qpair failed and we were unable to recover it. 00:25:49.208 [2024-11-20 07:27:52.348479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.208 [2024-11-20 07:27:52.348506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.208 qpair failed and we were unable to recover it. 00:25:49.208 [2024-11-20 07:27:52.348603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.208 [2024-11-20 07:27:52.348630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.208 qpair failed and we were unable to recover it. 00:25:49.208 [2024-11-20 07:27:52.348714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.208 [2024-11-20 07:27:52.348740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.208 qpair failed and we were unable to recover it. 00:25:49.208 [2024-11-20 07:27:52.348821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.208 [2024-11-20 07:27:52.348846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.208 qpair failed and we were unable to recover it. 00:25:49.208 [2024-11-20 07:27:52.348936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.208 [2024-11-20 07:27:52.348961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.208 qpair failed and we were unable to recover it. 
00:25:49.208 [2024-11-20 07:27:52.349050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.208 [2024-11-20 07:27:52.349090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.208 qpair failed and we were unable to recover it. 00:25:49.208 [2024-11-20 07:27:52.349238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.208 [2024-11-20 07:27:52.349277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.208 qpair failed and we were unable to recover it. 00:25:49.208 [2024-11-20 07:27:52.349405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.208 [2024-11-20 07:27:52.349434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.208 qpair failed and we were unable to recover it. 00:25:49.208 [2024-11-20 07:27:52.349551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.208 [2024-11-20 07:27:52.349578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.208 qpair failed and we were unable to recover it. 00:25:49.208 [2024-11-20 07:27:52.349692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.208 [2024-11-20 07:27:52.349719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.208 qpair failed and we were unable to recover it. 00:25:49.208 [2024-11-20 07:27:52.349835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.208 [2024-11-20 07:27:52.349863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.208 qpair failed and we were unable to recover it. 00:25:49.208 [2024-11-20 07:27:52.349946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.208 [2024-11-20 07:27:52.349973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.208 qpair failed and we were unable to recover it. 00:25:49.208 [2024-11-20 07:27:52.350084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.208 [2024-11-20 07:27:52.350123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.208 qpair failed and we were unable to recover it. 00:25:49.208 [2024-11-20 07:27:52.350251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.208 [2024-11-20 07:27:52.350291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.208 qpair failed and we were unable to recover it. 00:25:49.208 [2024-11-20 07:27:52.350425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.208 [2024-11-20 07:27:52.350453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.208 qpair failed and we were unable to recover it. 
00:25:49.208 [2024-11-20 07:27:52.350537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.208 [2024-11-20 07:27:52.350563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.208 qpair failed and we were unable to recover it. 00:25:49.208 [2024-11-20 07:27:52.350649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.208 [2024-11-20 07:27:52.350676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.208 qpair failed and we were unable to recover it. 00:25:49.208 [2024-11-20 07:27:52.350815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.208 [2024-11-20 07:27:52.350841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.208 qpair failed and we were unable to recover it. 00:25:49.208 [2024-11-20 07:27:52.350935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.208 [2024-11-20 07:27:52.350963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.208 qpair failed and we were unable to recover it. 00:25:49.208 [2024-11-20 07:27:52.351053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.208 [2024-11-20 07:27:52.351083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.208 qpair failed and we were unable to recover it. 00:25:49.208 [2024-11-20 07:27:52.351166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.208 [2024-11-20 07:27:52.351192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.208 qpair failed and we were unable to recover it. 00:25:49.208 [2024-11-20 07:27:52.351288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.208 [2024-11-20 07:27:52.351322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.208 qpair failed and we were unable to recover it. 00:25:49.208 [2024-11-20 07:27:52.351469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.208 [2024-11-20 07:27:52.351495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.208 qpair failed and we were unable to recover it. 00:25:49.208 [2024-11-20 07:27:52.351585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.208 [2024-11-20 07:27:52.351612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.208 qpair failed and we were unable to recover it. 00:25:49.208 [2024-11-20 07:27:52.351697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.208 [2024-11-20 07:27:52.351724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.208 qpair failed and we were unable to recover it. 
00:25:49.208 [2024-11-20 07:27:52.351810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.208 [2024-11-20 07:27:52.351836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.208 qpair failed and we were unable to recover it. 00:25:49.208 [2024-11-20 07:27:52.351922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.208 [2024-11-20 07:27:52.351948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.208 qpair failed and we were unable to recover it. 00:25:49.208 [2024-11-20 07:27:52.352037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.208 [2024-11-20 07:27:52.352069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.208 qpair failed and we were unable to recover it. 00:25:49.208 [2024-11-20 07:27:52.352149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.208 [2024-11-20 07:27:52.352175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.208 qpair failed and we were unable to recover it. 00:25:49.208 [2024-11-20 07:27:52.352271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.209 [2024-11-20 07:27:52.352317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.209 qpair failed and we were unable to recover it. 00:25:49.209 [2024-11-20 07:27:52.352406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.209 [2024-11-20 07:27:52.352435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.209 qpair failed and we were unable to recover it. 00:25:49.209 [2024-11-20 07:27:52.352547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.209 [2024-11-20 07:27:52.352575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.209 qpair failed and we were unable to recover it. 00:25:49.209 [2024-11-20 07:27:52.352725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.209 [2024-11-20 07:27:52.352752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.209 qpair failed and we were unable to recover it. 00:25:49.209 [2024-11-20 07:27:52.352868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.209 [2024-11-20 07:27:52.352894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.209 qpair failed and we were unable to recover it. 00:25:49.209 [2024-11-20 07:27:52.353006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.209 [2024-11-20 07:27:52.353033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.209 qpair failed and we were unable to recover it. 
00:25:49.209 [2024-11-20 07:27:52.353141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.209 [2024-11-20 07:27:52.353167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.209 qpair failed and we were unable to recover it. 00:25:49.209 [2024-11-20 07:27:52.353281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.209 [2024-11-20 07:27:52.353316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.209 qpair failed and we were unable to recover it. 00:25:49.209 [2024-11-20 07:27:52.353402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.209 [2024-11-20 07:27:52.353429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.209 qpair failed and we were unable to recover it. 00:25:49.209 [2024-11-20 07:27:52.353540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.209 [2024-11-20 07:27:52.353566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.209 qpair failed and we were unable to recover it. 00:25:49.209 [2024-11-20 07:27:52.353672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.209 [2024-11-20 07:27:52.353698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.209 qpair failed and we were unable to recover it. 00:25:49.209 [2024-11-20 07:27:52.353792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.209 [2024-11-20 07:27:52.353819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.209 qpair failed and we were unable to recover it. 00:25:49.209 [2024-11-20 07:27:52.353919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.209 [2024-11-20 07:27:52.353948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.209 qpair failed and we were unable to recover it. 00:25:49.209 [2024-11-20 07:27:52.354038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.209 [2024-11-20 07:27:52.354065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.209 qpair failed and we were unable to recover it. 00:25:49.209 [2024-11-20 07:27:52.354149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.209 [2024-11-20 07:27:52.354175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.209 qpair failed and we were unable to recover it. 00:25:49.209 [2024-11-20 07:27:52.354265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.209 [2024-11-20 07:27:52.354292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.209 qpair failed and we were unable to recover it. 
00:25:49.209 [2024-11-20 07:27:52.354387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.209 [2024-11-20 07:27:52.354414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.209 qpair failed and we were unable to recover it. 00:25:49.209 [2024-11-20 07:27:52.354498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.209 [2024-11-20 07:27:52.354525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.209 qpair failed and we were unable to recover it. 00:25:49.209 [2024-11-20 07:27:52.354619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.209 [2024-11-20 07:27:52.354646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.209 qpair failed and we were unable to recover it. 00:25:49.209 [2024-11-20 07:27:52.354762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.209 [2024-11-20 07:27:52.354789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.209 qpair failed and we were unable to recover it. 00:25:49.209 [2024-11-20 07:27:52.354873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.209 [2024-11-20 07:27:52.354900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.209 qpair failed and we were unable to recover it. 00:25:49.209 [2024-11-20 07:27:52.355016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.209 [2024-11-20 07:27:52.355042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.209 qpair failed and we were unable to recover it. 00:25:49.209 [2024-11-20 07:27:52.355126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.209 [2024-11-20 07:27:52.355155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.209 qpair failed and we were unable to recover it. 00:25:49.209 [2024-11-20 07:27:52.355235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.209 [2024-11-20 07:27:52.355262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.209 qpair failed and we were unable to recover it. 00:25:49.209 [2024-11-20 07:27:52.355354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.209 [2024-11-20 07:27:52.355383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.209 qpair failed and we were unable to recover it. 00:25:49.209 [2024-11-20 07:27:52.355468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.209 [2024-11-20 07:27:52.355496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.209 qpair failed and we were unable to recover it. 
00:25:49.209 [2024-11-20 07:27:52.355607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.209 [2024-11-20 07:27:52.355645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.209 qpair failed and we were unable to recover it. 00:25:49.209 [2024-11-20 07:27:52.355777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.209 [2024-11-20 07:27:52.355805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.209 qpair failed and we were unable to recover it. 00:25:49.209 [2024-11-20 07:27:52.355907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.209 [2024-11-20 07:27:52.355933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.209 qpair failed and we were unable to recover it. 00:25:49.209 [2024-11-20 07:27:52.356044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.209 [2024-11-20 07:27:52.356070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.209 qpair failed and we were unable to recover it. 00:25:49.209 [2024-11-20 07:27:52.356152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.209 [2024-11-20 07:27:52.356180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.209 qpair failed and we were unable to recover it. 00:25:49.209 [2024-11-20 07:27:52.356288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.209 [2024-11-20 07:27:52.356329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.209 qpair failed and we were unable to recover it. 00:25:49.209 [2024-11-20 07:27:52.356417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.209 [2024-11-20 07:27:52.356443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.209 qpair failed and we were unable to recover it. 00:25:49.209 [2024-11-20 07:27:52.356528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.209 [2024-11-20 07:27:52.356554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.209 qpair failed and we were unable to recover it. 00:25:49.209 [2024-11-20 07:27:52.356632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.209 [2024-11-20 07:27:52.356658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.209 qpair failed and we were unable to recover it. 00:25:49.209 [2024-11-20 07:27:52.356753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.210 [2024-11-20 07:27:52.356781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.210 qpair failed and we were unable to recover it. 
00:25:49.210 [2024-11-20 07:27:52.356863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.210 [2024-11-20 07:27:52.356890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.210 qpair failed and we were unable to recover it. 00:25:49.210 [2024-11-20 07:27:52.356977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.210 [2024-11-20 07:27:52.357005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.210 qpair failed and we were unable to recover it. 00:25:49.210 [2024-11-20 07:27:52.357088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.210 [2024-11-20 07:27:52.357116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.210 qpair failed and we were unable to recover it. 00:25:49.210 [2024-11-20 07:27:52.357206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.210 [2024-11-20 07:27:52.357233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.210 qpair failed and we were unable to recover it. 00:25:49.210 [2024-11-20 07:27:52.357315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.210 [2024-11-20 07:27:52.357342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.210 qpair failed and we were unable to recover it. 00:25:49.210 [2024-11-20 07:27:52.357422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.210 [2024-11-20 07:27:52.357449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.210 qpair failed and we were unable to recover it. 00:25:49.210 [2024-11-20 07:27:52.357537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.210 [2024-11-20 07:27:52.357563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.210 qpair failed and we were unable to recover it. 00:25:49.210 [2024-11-20 07:27:52.357661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.210 [2024-11-20 07:27:52.357687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.210 qpair failed and we were unable to recover it. 00:25:49.210 [2024-11-20 07:27:52.357799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.210 [2024-11-20 07:27:52.357827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.210 qpair failed and we were unable to recover it. 00:25:49.210 [2024-11-20 07:27:52.357944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.210 [2024-11-20 07:27:52.357971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.210 qpair failed and we were unable to recover it. 
00:25:49.210 [2024-11-20 07:27:52.358077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.210 [2024-11-20 07:27:52.358117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.210 qpair failed and we were unable to recover it. 00:25:49.210 [2024-11-20 07:27:52.358241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.210 [2024-11-20 07:27:52.358268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.210 qpair failed and we were unable to recover it. 00:25:49.210 [2024-11-20 07:27:52.358396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.210 [2024-11-20 07:27:52.358426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.210 qpair failed and we were unable to recover it. 00:25:49.210 [2024-11-20 07:27:52.358508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.210 [2024-11-20 07:27:52.358535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.210 qpair failed and we were unable to recover it. 00:25:49.210 [2024-11-20 07:27:52.358669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.210 [2024-11-20 07:27:52.358696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.210 qpair failed and we were unable to recover it. 00:25:49.210 [2024-11-20 07:27:52.358783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.210 [2024-11-20 07:27:52.358809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.210 qpair failed and we were unable to recover it. 00:25:49.210 [2024-11-20 07:27:52.358903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.210 [2024-11-20 07:27:52.358929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.210 qpair failed and we were unable to recover it. 00:25:49.210 [2024-11-20 07:27:52.359023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.210 [2024-11-20 07:27:52.359050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.210 qpair failed and we were unable to recover it. 00:25:49.210 [2024-11-20 07:27:52.359173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.210 [2024-11-20 07:27:52.359214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.210 qpair failed and we were unable to recover it. 00:25:49.210 [2024-11-20 07:27:52.359311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.210 [2024-11-20 07:27:52.359340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.210 qpair failed and we were unable to recover it. 
00:25:49.210 [2024-11-20 07:27:52.359427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.210 [2024-11-20 07:27:52.359454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.210 qpair failed and we were unable to recover it. 00:25:49.210 [2024-11-20 07:27:52.359559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.210 [2024-11-20 07:27:52.359585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.210 qpair failed and we were unable to recover it. 00:25:49.210 [2024-11-20 07:27:52.359697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.210 [2024-11-20 07:27:52.359723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.210 qpair failed and we were unable to recover it. 00:25:49.210 [2024-11-20 07:27:52.359813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.210 [2024-11-20 07:27:52.359839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.210 qpair failed and we were unable to recover it. 00:25:49.210 [2024-11-20 07:27:52.359922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.210 [2024-11-20 07:27:52.359948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.210 qpair failed and we were unable to recover it. 00:25:49.210 [2024-11-20 07:27:52.360046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.210 [2024-11-20 07:27:52.360086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.210 qpair failed and we were unable to recover it. 00:25:49.210 [2024-11-20 07:27:52.360176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.210 [2024-11-20 07:27:52.360204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.210 qpair failed and we were unable to recover it. 00:25:49.210 [2024-11-20 07:27:52.360300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.210 [2024-11-20 07:27:52.360334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.210 qpair failed and we were unable to recover it. 00:25:49.210 [2024-11-20 07:27:52.360443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.210 [2024-11-20 07:27:52.360470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.210 qpair failed and we were unable to recover it. 00:25:49.210 [2024-11-20 07:27:52.360562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.210 [2024-11-20 07:27:52.360602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.210 qpair failed and we were unable to recover it. 
00:25:49.210 [2024-11-20 07:27:52.360696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.210 [2024-11-20 07:27:52.360724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.210 qpair failed and we were unable to recover it. 00:25:49.210 [2024-11-20 07:27:52.360865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.210 [2024-11-20 07:27:52.360893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.210 qpair failed and we were unable to recover it. 00:25:49.210 [2024-11-20 07:27:52.360981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.210 [2024-11-20 07:27:52.361010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.210 qpair failed and we were unable to recover it. 00:25:49.210 [2024-11-20 07:27:52.361105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.210 [2024-11-20 07:27:52.361132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.210 qpair failed and we were unable to recover it. 00:25:49.210 [2024-11-20 07:27:52.361225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.210 [2024-11-20 07:27:52.361252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.210 qpair failed and we were unable to recover it. 00:25:49.210 [2024-11-20 07:27:52.361381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.210 [2024-11-20 07:27:52.361408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.210 qpair failed and we were unable to recover it. 00:25:49.210 [2024-11-20 07:27:52.361494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.211 [2024-11-20 07:27:52.361520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.211 qpair failed and we were unable to recover it. 00:25:49.211 [2024-11-20 07:27:52.361601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.211 [2024-11-20 07:27:52.361627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.211 qpair failed and we were unable to recover it. 00:25:49.211 [2024-11-20 07:27:52.361739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.211 [2024-11-20 07:27:52.361765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.211 qpair failed and we were unable to recover it. 00:25:49.211 [2024-11-20 07:27:52.361842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.211 [2024-11-20 07:27:52.361868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.211 qpair failed and we were unable to recover it. 
00:25:49.211 [2024-11-20 07:27:52.361956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.211 [2024-11-20 07:27:52.361983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.211 qpair failed and we were unable to recover it. 00:25:49.211 [2024-11-20 07:27:52.362089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.211 [2024-11-20 07:27:52.362128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.211 qpair failed and we were unable to recover it. 00:25:49.211 [2024-11-20 07:27:52.362211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.211 [2024-11-20 07:27:52.362239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.211 qpair failed and we were unable to recover it. 00:25:49.211 [2024-11-20 07:27:52.362344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.211 [2024-11-20 07:27:52.362373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.211 qpair failed and we were unable to recover it. 00:25:49.211 [2024-11-20 07:27:52.362455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.211 [2024-11-20 07:27:52.362483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.211 qpair failed and we were unable to recover it. 00:25:49.211 [2024-11-20 07:27:52.362608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.211 [2024-11-20 07:27:52.362635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.211 qpair failed and we were unable to recover it. 00:25:49.211 [2024-11-20 07:27:52.362724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.211 [2024-11-20 07:27:52.362751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.211 qpair failed and we were unable to recover it. 00:25:49.211 [2024-11-20 07:27:52.362867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.211 [2024-11-20 07:27:52.362894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.211 qpair failed and we were unable to recover it. 00:25:49.211 [2024-11-20 07:27:52.362990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.211 [2024-11-20 07:27:52.363032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.211 qpair failed and we were unable to recover it. 00:25:49.211 [2024-11-20 07:27:52.363160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.211 [2024-11-20 07:27:52.363199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.211 qpair failed and we were unable to recover it. 
00:25:49.211 [2024-11-20 07:27:52.363292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.211 [2024-11-20 07:27:52.363328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.211 qpair failed and we were unable to recover it. 00:25:49.211 [2024-11-20 07:27:52.363436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.211 [2024-11-20 07:27:52.363463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.211 qpair failed and we were unable to recover it. 00:25:49.211 [2024-11-20 07:27:52.363558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.211 [2024-11-20 07:27:52.363584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.211 qpair failed and we were unable to recover it. 00:25:49.211 [2024-11-20 07:27:52.363685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.211 [2024-11-20 07:27:52.363711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.211 qpair failed and we were unable to recover it. 00:25:49.211 [2024-11-20 07:27:52.363794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.211 [2024-11-20 07:27:52.363821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.211 qpair failed and we were unable to recover it. 00:25:49.211 [2024-11-20 07:27:52.363912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.211 [2024-11-20 07:27:52.363938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.211 qpair failed and we were unable to recover it. 00:25:49.211 [2024-11-20 07:27:52.364069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.211 [2024-11-20 07:27:52.364110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.211 qpair failed and we were unable to recover it. 00:25:49.211 [2024-11-20 07:27:52.364198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.211 [2024-11-20 07:27:52.364225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.211 qpair failed and we were unable to recover it. 00:25:49.211 [2024-11-20 07:27:52.364327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.211 [2024-11-20 07:27:52.364357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.211 qpair failed and we were unable to recover it. 00:25:49.211 [2024-11-20 07:27:52.364450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.211 [2024-11-20 07:27:52.364477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.211 qpair failed and we were unable to recover it. 
00:25:49.211 [2024-11-20 07:27:52.364594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.211 [2024-11-20 07:27:52.364620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.211 qpair failed and we were unable to recover it. 00:25:49.211 [2024-11-20 07:27:52.364728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.211 [2024-11-20 07:27:52.364754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.211 qpair failed and we were unable to recover it. 00:25:49.211 [2024-11-20 07:27:52.364874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.211 [2024-11-20 07:27:52.364902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.211 qpair failed and we were unable to recover it. 00:25:49.211 [2024-11-20 07:27:52.364991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.211 [2024-11-20 07:27:52.365016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.211 qpair failed and we were unable to recover it. 00:25:49.211 [2024-11-20 07:27:52.365101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.211 [2024-11-20 07:27:52.365127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.211 qpair failed and we were unable to recover it. 00:25:49.211 [2024-11-20 07:27:52.365212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.211 [2024-11-20 07:27:52.365238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.211 qpair failed and we were unable to recover it. 00:25:49.211 [2024-11-20 07:27:52.365345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.211 [2024-11-20 07:27:52.365371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.211 qpair failed and we were unable to recover it. 00:25:49.211 [2024-11-20 07:27:52.365458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.211 [2024-11-20 07:27:52.365484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.211 qpair failed and we were unable to recover it. 00:25:49.211 [2024-11-20 07:27:52.365608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.211 [2024-11-20 07:27:52.365635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.211 qpair failed and we were unable to recover it. 00:25:49.211 [2024-11-20 07:27:52.365758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.211 [2024-11-20 07:27:52.365784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.211 qpair failed and we were unable to recover it. 
00:25:49.211 [2024-11-20 07:27:52.365895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.211 [2024-11-20 07:27:52.365921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.211 qpair failed and we were unable to recover it. 00:25:49.211 [2024-11-20 07:27:52.366008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.211 [2024-11-20 07:27:52.366036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.211 qpair failed and we were unable to recover it. 00:25:49.211 [2024-11-20 07:27:52.366175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.211 [2024-11-20 07:27:52.366215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.211 qpair failed and we were unable to recover it. 00:25:49.212 [2024-11-20 07:27:52.366325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.212 [2024-11-20 07:27:52.366365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.212 qpair failed and we were unable to recover it. 00:25:49.212 [2024-11-20 07:27:52.366467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.212 [2024-11-20 07:27:52.366496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.212 qpair failed and we were unable to recover it. 00:25:49.212 [2024-11-20 07:27:52.366616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.212 [2024-11-20 07:27:52.366643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.212 qpair failed and we were unable to recover it. 00:25:49.212 [2024-11-20 07:27:52.366729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.212 [2024-11-20 07:27:52.366756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.212 qpair failed and we were unable to recover it. 00:25:49.212 [2024-11-20 07:27:52.366840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.212 [2024-11-20 07:27:52.366866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.212 qpair failed and we were unable to recover it. 00:25:49.212 [2024-11-20 07:27:52.366950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.212 [2024-11-20 07:27:52.366977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.212 qpair failed and we were unable to recover it. 00:25:49.212 [2024-11-20 07:27:52.367070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.212 [2024-11-20 07:27:52.367100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.212 qpair failed and we were unable to recover it. 
00:25:49.212 [2024-11-20 07:27:52.367189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.212 [2024-11-20 07:27:52.367216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.212 qpair failed and we were unable to recover it. 00:25:49.212 [2024-11-20 07:27:52.367311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.212 [2024-11-20 07:27:52.367338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.212 qpair failed and we were unable to recover it. 00:25:49.212 [2024-11-20 07:27:52.367453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.212 [2024-11-20 07:27:52.367480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.212 qpair failed and we were unable to recover it. 00:25:49.212 [2024-11-20 07:27:52.367573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.212 [2024-11-20 07:27:52.367602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.212 qpair failed and we were unable to recover it. 00:25:49.212 [2024-11-20 07:27:52.367725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.212 [2024-11-20 07:27:52.367753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.212 qpair failed and we were unable to recover it. 00:25:49.212 [2024-11-20 07:27:52.367882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.212 [2024-11-20 07:27:52.367908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.212 qpair failed and we were unable to recover it. 00:25:49.212 [2024-11-20 07:27:52.368006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.212 [2024-11-20 07:27:52.368034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.212 qpair failed and we were unable to recover it. 00:25:49.212 [2024-11-20 07:27:52.368119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.212 [2024-11-20 07:27:52.368146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.212 qpair failed and we were unable to recover it. 00:25:49.212 [2024-11-20 07:27:52.368230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.212 [2024-11-20 07:27:52.368257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.212 qpair failed and we were unable to recover it. 00:25:49.212 [2024-11-20 07:27:52.368345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.212 [2024-11-20 07:27:52.368372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.212 qpair failed and we were unable to recover it. 
00:25:49.212 [2024-11-20 07:27:52.368456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.212 [2024-11-20 07:27:52.368482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.212 qpair failed and we were unable to recover it. 00:25:49.212 [2024-11-20 07:27:52.368560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.212 [2024-11-20 07:27:52.368586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.212 qpair failed and we were unable to recover it. 00:25:49.212 [2024-11-20 07:27:52.368726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.212 [2024-11-20 07:27:52.368752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.212 qpair failed and we were unable to recover it. 00:25:49.212 [2024-11-20 07:27:52.368838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.212 [2024-11-20 07:27:52.368866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.212 qpair failed and we were unable to recover it. 00:25:49.212 [2024-11-20 07:27:52.368955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.212 [2024-11-20 07:27:52.368981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.212 qpair failed and we were unable to recover it. 00:25:49.212 [2024-11-20 07:27:52.369070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.212 [2024-11-20 07:27:52.369099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.212 qpair failed and we were unable to recover it. 00:25:49.212 [2024-11-20 07:27:52.369221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.212 [2024-11-20 07:27:52.369255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.212 qpair failed and we were unable to recover it. 00:25:49.212 [2024-11-20 07:27:52.369378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.212 [2024-11-20 07:27:52.369405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.212 qpair failed and we were unable to recover it. 00:25:49.212 [2024-11-20 07:27:52.369515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.212 [2024-11-20 07:27:52.369542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.212 qpair failed and we were unable to recover it. 00:25:49.212 [2024-11-20 07:27:52.369665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.212 [2024-11-20 07:27:52.369691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.212 qpair failed and we were unable to recover it. 
00:25:49.212 [2024-11-20 07:27:52.369808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.212 [2024-11-20 07:27:52.369834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.212 qpair failed and we were unable to recover it. 00:25:49.212 [2024-11-20 07:27:52.369961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.212 [2024-11-20 07:27:52.369988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.212 qpair failed and we were unable to recover it. 00:25:49.212 [2024-11-20 07:27:52.370127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.212 [2024-11-20 07:27:52.370153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.212 qpair failed and we were unable to recover it. 00:25:49.212 [2024-11-20 07:27:52.370278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.212 [2024-11-20 07:27:52.370324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.212 qpair failed and we were unable to recover it. 00:25:49.212 [2024-11-20 07:27:52.370453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.212 [2024-11-20 07:27:52.370481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.212 qpair failed and we were unable to recover it. 00:25:49.213 [2024-11-20 07:27:52.370564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.213 [2024-11-20 07:27:52.370591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.213 qpair failed and we were unable to recover it. 00:25:49.213 [2024-11-20 07:27:52.370688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.213 [2024-11-20 07:27:52.370715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.213 qpair failed and we were unable to recover it. 00:25:49.213 [2024-11-20 07:27:52.370836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.213 [2024-11-20 07:27:52.370863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.213 qpair failed and we were unable to recover it. 00:25:49.213 [2024-11-20 07:27:52.370967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.213 [2024-11-20 07:27:52.370993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.213 qpair failed and we were unable to recover it. 00:25:49.213 [2024-11-20 07:27:52.371078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.213 [2024-11-20 07:27:52.371104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.213 qpair failed and we were unable to recover it. 
00:25:49.213 [2024-11-20 07:27:52.371197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.213 [2024-11-20 07:27:52.371224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.213 qpair failed and we were unable to recover it. 00:25:49.213 [2024-11-20 07:27:52.371339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.213 [2024-11-20 07:27:52.371366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.213 qpair failed and we were unable to recover it. 00:25:49.213 [2024-11-20 07:27:52.371484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.213 [2024-11-20 07:27:52.371523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.213 qpair failed and we were unable to recover it. 00:25:49.213 [2024-11-20 07:27:52.371646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.213 [2024-11-20 07:27:52.371674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.213 qpair failed and we were unable to recover it. 00:25:49.213 [2024-11-20 07:27:52.371767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.213 [2024-11-20 07:27:52.371793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.213 qpair failed and we were unable to recover it. 00:25:49.213 [2024-11-20 07:27:52.371882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.213 [2024-11-20 07:27:52.371909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.213 qpair failed and we were unable to recover it. 00:25:49.213 [2024-11-20 07:27:52.372010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.213 [2024-11-20 07:27:52.372038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.213 qpair failed and we were unable to recover it. 00:25:49.213 [2024-11-20 07:27:52.372165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.213 [2024-11-20 07:27:52.372192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.213 qpair failed and we were unable to recover it. 00:25:49.213 [2024-11-20 07:27:52.372276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.213 [2024-11-20 07:27:52.372313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.213 qpair failed and we were unable to recover it. 00:25:49.213 [2024-11-20 07:27:52.372428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.213 [2024-11-20 07:27:52.372455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.213 qpair failed and we were unable to recover it. 
00:25:49.213 [2024-11-20 07:27:52.372549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.213 [2024-11-20 07:27:52.372575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.213 qpair failed and we were unable to recover it. 00:25:49.213 [2024-11-20 07:27:52.372680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.213 [2024-11-20 07:27:52.372706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.213 qpair failed and we were unable to recover it. 00:25:49.213 [2024-11-20 07:27:52.372824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.213 [2024-11-20 07:27:52.372851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.213 qpair failed and we were unable to recover it. 00:25:49.213 [2024-11-20 07:27:52.372940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.213 [2024-11-20 07:27:52.372971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.213 qpair failed and we were unable to recover it. 00:25:49.213 [2024-11-20 07:27:52.373065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.213 [2024-11-20 07:27:52.373091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.213 qpair failed and we were unable to recover it. 00:25:49.213 [2024-11-20 07:27:52.373208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.213 [2024-11-20 07:27:52.373234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.213 qpair failed and we were unable to recover it. 00:25:49.213 [2024-11-20 07:27:52.373325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.213 [2024-11-20 07:27:52.373351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.213 qpair failed and we were unable to recover it. 00:25:49.213 [2024-11-20 07:27:52.373466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.213 [2024-11-20 07:27:52.373492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.213 qpair failed and we were unable to recover it. 00:25:49.213 [2024-11-20 07:27:52.373575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.213 [2024-11-20 07:27:52.373601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.213 qpair failed and we were unable to recover it. 00:25:49.213 [2024-11-20 07:27:52.373730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.213 [2024-11-20 07:27:52.373756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.213 qpair failed and we were unable to recover it. 
00:25:49.213 [2024-11-20 07:27:52.373843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.213 [2024-11-20 07:27:52.373869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.213 qpair failed and we were unable to recover it. 00:25:49.213 [2024-11-20 07:27:52.373958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.213 [2024-11-20 07:27:52.373984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.213 qpair failed and we were unable to recover it. 00:25:49.213 [2024-11-20 07:27:52.374076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.213 [2024-11-20 07:27:52.374102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.213 qpair failed and we were unable to recover it. 00:25:49.213 [2024-11-20 07:27:52.374214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.213 [2024-11-20 07:27:52.374240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.213 qpair failed and we were unable to recover it. 00:25:49.213 [2024-11-20 07:27:52.374350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.213 [2024-11-20 07:27:52.374377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.213 qpair failed and we were unable to recover it. 00:25:49.213 [2024-11-20 07:27:52.374486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.213 [2024-11-20 07:27:52.374512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.213 qpair failed and we were unable to recover it. 00:25:49.213 [2024-11-20 07:27:52.374611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.213 [2024-11-20 07:27:52.374638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.213 qpair failed and we were unable to recover it. 00:25:49.213 [2024-11-20 07:27:52.374749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.213 [2024-11-20 07:27:52.374775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.213 qpair failed and we were unable to recover it. 00:25:49.213 [2024-11-20 07:27:52.374890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.213 [2024-11-20 07:27:52.374915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.213 qpair failed and we were unable to recover it. 00:25:49.213 [2024-11-20 07:27:52.375030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.213 [2024-11-20 07:27:52.375056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.213 qpair failed and we were unable to recover it. 
00:25:49.213 [2024-11-20 07:27:52.375138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.213 [2024-11-20 07:27:52.375165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.213 qpair failed and we were unable to recover it. 00:25:49.213 [2024-11-20 07:27:52.375287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.213 [2024-11-20 07:27:52.375334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.214 qpair failed and we were unable to recover it. 00:25:49.214 [2024-11-20 07:27:52.375444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.214 [2024-11-20 07:27:52.375483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.214 qpair failed and we were unable to recover it. 00:25:49.214 [2024-11-20 07:27:52.375586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.214 [2024-11-20 07:27:52.375613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.214 qpair failed and we were unable to recover it. 00:25:49.214 [2024-11-20 07:27:52.375726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.214 [2024-11-20 07:27:52.375752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.214 qpair failed and we were unable to recover it. 00:25:49.214 [2024-11-20 07:27:52.375835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.214 [2024-11-20 07:27:52.375862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.214 qpair failed and we were unable to recover it. 00:25:49.214 [2024-11-20 07:27:52.375950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.214 [2024-11-20 07:27:52.375976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.214 qpair failed and we were unable to recover it. 00:25:49.214 [2024-11-20 07:27:52.376062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.214 [2024-11-20 07:27:52.376088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.214 qpair failed and we were unable to recover it. 00:25:49.214 [2024-11-20 07:27:52.376171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.214 [2024-11-20 07:27:52.376198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.214 qpair failed and we were unable to recover it. 00:25:49.214 [2024-11-20 07:27:52.376286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.214 [2024-11-20 07:27:52.376321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.214 qpair failed and we were unable to recover it. 
00:25:49.214 [2024-11-20 07:27:52.376419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.214 [2024-11-20 07:27:52.376453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.214 qpair failed and we were unable to recover it. 00:25:49.214 [2024-11-20 07:27:52.376538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.214 [2024-11-20 07:27:52.376565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.214 qpair failed and we were unable to recover it. 00:25:49.214 [2024-11-20 07:27:52.376655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.214 [2024-11-20 07:27:52.376683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.214 qpair failed and we were unable to recover it. 00:25:49.214 [2024-11-20 07:27:52.376762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.214 [2024-11-20 07:27:52.376790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.214 qpair failed and we were unable to recover it. 00:25:49.214 [2024-11-20 07:27:52.376888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.214 [2024-11-20 07:27:52.376918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.214 qpair failed and we were unable to recover it. 00:25:49.214 [2024-11-20 07:27:52.377017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.214 [2024-11-20 07:27:52.377044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.214 qpair failed and we were unable to recover it. 00:25:49.214 [2024-11-20 07:27:52.377136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.214 [2024-11-20 07:27:52.377162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.214 qpair failed and we were unable to recover it. 00:25:49.214 [2024-11-20 07:27:52.377250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.214 [2024-11-20 07:27:52.377276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.214 qpair failed and we were unable to recover it. 00:25:49.214 [2024-11-20 07:27:52.377421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.214 [2024-11-20 07:27:52.377448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.214 qpair failed and we were unable to recover it. 00:25:49.214 [2024-11-20 07:27:52.377538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.214 [2024-11-20 07:27:52.377564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.214 qpair failed and we were unable to recover it. 
00:25:49.214 [2024-11-20 07:27:52.377679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.214 [2024-11-20 07:27:52.377705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.214 qpair failed and we were unable to recover it. 00:25:49.214 [2024-11-20 07:27:52.377821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.214 [2024-11-20 07:27:52.377847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.214 qpair failed and we were unable to recover it. 00:25:49.214 [2024-11-20 07:27:52.377931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.214 [2024-11-20 07:27:52.377957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.214 qpair failed and we were unable to recover it. 00:25:49.214 [2024-11-20 07:27:52.378060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.214 [2024-11-20 07:27:52.378088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.214 qpair failed and we were unable to recover it. 00:25:49.214 [2024-11-20 07:27:52.378214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.214 [2024-11-20 07:27:52.378240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.214 qpair failed and we were unable to recover it. 00:25:49.214 [2024-11-20 07:27:52.378330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.214 [2024-11-20 07:27:52.378357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.214 qpair failed and we were unable to recover it. 00:25:49.214 [2024-11-20 07:27:52.378464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.214 [2024-11-20 07:27:52.378490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.214 qpair failed and we were unable to recover it. 00:25:49.214 [2024-11-20 07:27:52.378604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.214 [2024-11-20 07:27:52.378631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.214 qpair failed and we were unable to recover it. 00:25:49.214 [2024-11-20 07:27:52.378723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.214 [2024-11-20 07:27:52.378749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.214 qpair failed and we were unable to recover it. 00:25:49.214 [2024-11-20 07:27:52.378831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.214 [2024-11-20 07:27:52.378857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.214 qpair failed and we were unable to recover it. 
00:25:49.214 [2024-11-20 07:27:52.378989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.214 [2024-11-20 07:27:52.379016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.214 qpair failed and we were unable to recover it. 00:25:49.214 [2024-11-20 07:27:52.379123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.214 [2024-11-20 07:27:52.379163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.214 qpair failed and we were unable to recover it. 00:25:49.214 [2024-11-20 07:27:52.379253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.214 [2024-11-20 07:27:52.379281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.214 qpair failed and we were unable to recover it. 00:25:49.214 [2024-11-20 07:27:52.379385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.214 [2024-11-20 07:27:52.379415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.214 qpair failed and we were unable to recover it. 00:25:49.214 [2024-11-20 07:27:52.379528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.214 [2024-11-20 07:27:52.379555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.214 qpair failed and we were unable to recover it. 00:25:49.214 [2024-11-20 07:27:52.379703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.214 [2024-11-20 07:27:52.379730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.214 qpair failed and we were unable to recover it. 00:25:49.214 [2024-11-20 07:27:52.379836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.214 [2024-11-20 07:27:52.379863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.214 qpair failed and we were unable to recover it. 00:25:49.214 [2024-11-20 07:27:52.379947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.214 [2024-11-20 07:27:52.379980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.214 qpair failed and we were unable to recover it. 00:25:49.214 [2024-11-20 07:27:52.380095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.214 [2024-11-20 07:27:52.380122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.214 qpair failed and we were unable to recover it. 00:25:49.214 [2024-11-20 07:27:52.380212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.215 [2024-11-20 07:27:52.380241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.215 qpair failed and we were unable to recover it. 
00:25:49.215 [2024-11-20 07:27:52.380341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.215 [2024-11-20 07:27:52.380369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.215 qpair failed and we were unable to recover it. 00:25:49.215 [2024-11-20 07:27:52.380455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.215 [2024-11-20 07:27:52.380481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.215 qpair failed and we were unable to recover it. 00:25:49.215 [2024-11-20 07:27:52.380619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.215 [2024-11-20 07:27:52.380646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.215 qpair failed and we were unable to recover it. 00:25:49.215 [2024-11-20 07:27:52.380740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.215 [2024-11-20 07:27:52.380767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.215 qpair failed and we were unable to recover it. 00:25:49.215 [2024-11-20 07:27:52.380881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.215 [2024-11-20 07:27:52.380908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.215 qpair failed and we were unable to recover it. 00:25:49.215 [2024-11-20 07:27:52.380999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.215 [2024-11-20 07:27:52.381024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.215 qpair failed and we were unable to recover it. 00:25:49.215 [2024-11-20 07:27:52.381103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.215 [2024-11-20 07:27:52.381129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.215 qpair failed and we were unable to recover it. 00:25:49.215 [2024-11-20 07:27:52.381218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.215 [2024-11-20 07:27:52.381244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.215 qpair failed and we were unable to recover it. 00:25:49.215 [2024-11-20 07:27:52.381340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.215 [2024-11-20 07:27:52.381368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.215 qpair failed and we were unable to recover it. 00:25:49.215 [2024-11-20 07:27:52.381477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.215 [2024-11-20 07:27:52.381503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.215 qpair failed and we were unable to recover it. 
00:25:49.215 [2024-11-20 07:27:52.381618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.215 [2024-11-20 07:27:52.381645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.215 qpair failed and we were unable to recover it. 00:25:49.215 [2024-11-20 07:27:52.381741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.215 [2024-11-20 07:27:52.381768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.215 qpair failed and we were unable to recover it. 00:25:49.215 [2024-11-20 07:27:52.381855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.215 [2024-11-20 07:27:52.381881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.215 qpair failed and we were unable to recover it. 00:25:49.215 [2024-11-20 07:27:52.381999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.215 [2024-11-20 07:27:52.382026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.215 qpair failed and we were unable to recover it. 00:25:49.215 [2024-11-20 07:27:52.382118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.215 [2024-11-20 07:27:52.382145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.215 qpair failed and we were unable to recover it. 00:25:49.215 [2024-11-20 07:27:52.382240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.215 [2024-11-20 07:27:52.382280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.215 qpair failed and we were unable to recover it. 00:25:49.215 [2024-11-20 07:27:52.382411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.215 [2024-11-20 07:27:52.382440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.215 qpair failed and we were unable to recover it. 00:25:49.215 [2024-11-20 07:27:52.382554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.215 [2024-11-20 07:27:52.382580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.215 qpair failed and we were unable to recover it. 00:25:49.215 [2024-11-20 07:27:52.382697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.215 [2024-11-20 07:27:52.382723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.215 qpair failed and we were unable to recover it. 00:25:49.215 [2024-11-20 07:27:52.382823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.215 [2024-11-20 07:27:52.382850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.215 qpair failed and we were unable to recover it. 
00:25:49.215 [2024-11-20 07:27:52.382947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.215 [2024-11-20 07:27:52.382974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.215 qpair failed and we were unable to recover it. 00:25:49.215 [2024-11-20 07:27:52.383070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.215 [2024-11-20 07:27:52.383110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.215 qpair failed and we were unable to recover it. 00:25:49.215 [2024-11-20 07:27:52.383241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.215 [2024-11-20 07:27:52.383282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.215 qpair failed and we were unable to recover it. 00:25:49.215 [2024-11-20 07:27:52.383415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.215 [2024-11-20 07:27:52.383442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.215 qpair failed and we were unable to recover it. 00:25:49.215 [2024-11-20 07:27:52.383558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.215 [2024-11-20 07:27:52.383586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.215 qpair failed and we were unable to recover it. 00:25:49.215 [2024-11-20 07:27:52.383703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.215 [2024-11-20 07:27:52.383729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.215 qpair failed and we were unable to recover it. 00:25:49.215 [2024-11-20 07:27:52.383868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.215 [2024-11-20 07:27:52.383894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.215 qpair failed and we were unable to recover it. 00:25:49.215 [2024-11-20 07:27:52.383979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.215 [2024-11-20 07:27:52.384005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.215 qpair failed and we were unable to recover it. 00:25:49.215 [2024-11-20 07:27:52.384103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.215 [2024-11-20 07:27:52.384142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.215 qpair failed and we were unable to recover it. 00:25:49.215 [2024-11-20 07:27:52.384276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.215 [2024-11-20 07:27:52.384327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.215 qpair failed and we were unable to recover it. 
00:25:49.215 [2024-11-20 07:27:52.384429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.215 [2024-11-20 07:27:52.384457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.215 qpair failed and we were unable to recover it. 00:25:49.215 [2024-11-20 07:27:52.384549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.215 [2024-11-20 07:27:52.384576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.215 qpair failed and we were unable to recover it. 00:25:49.215 [2024-11-20 07:27:52.384714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.215 [2024-11-20 07:27:52.384740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.215 qpair failed and we were unable to recover it. 00:25:49.215 [2024-11-20 07:27:52.384829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.215 [2024-11-20 07:27:52.384856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.215 qpair failed and we were unable to recover it. 00:25:49.215 [2024-11-20 07:27:52.384944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.215 [2024-11-20 07:27:52.384971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.215 qpair failed and we were unable to recover it. 00:25:49.215 [2024-11-20 07:27:52.385082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.215 [2024-11-20 07:27:52.385120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.215 qpair failed and we were unable to recover it. 00:25:49.215 [2024-11-20 07:27:52.385238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.215 [2024-11-20 07:27:52.385265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.215 qpair failed and we were unable to recover it. 00:25:49.216 [2024-11-20 07:27:52.385361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.216 [2024-11-20 07:27:52.385393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.216 qpair failed and we were unable to recover it. 00:25:49.216 [2024-11-20 07:27:52.385504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.216 [2024-11-20 07:27:52.385530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.216 qpair failed and we were unable to recover it. 00:25:49.216 [2024-11-20 07:27:52.385610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.216 [2024-11-20 07:27:52.385636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.216 qpair failed and we were unable to recover it. 
00:25:49.216 [2024-11-20 07:27:52.385719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.216 [2024-11-20 07:27:52.385745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.216 qpair failed and we were unable to recover it. 00:25:49.216 [2024-11-20 07:27:52.385838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.216 [2024-11-20 07:27:52.385864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.216 qpair failed and we were unable to recover it. 00:25:49.216 [2024-11-20 07:27:52.385947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.216 [2024-11-20 07:27:52.385972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.216 qpair failed and we were unable to recover it. 00:25:49.216 [2024-11-20 07:27:52.386062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.216 [2024-11-20 07:27:52.386087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.216 qpair failed and we were unable to recover it. 00:25:49.216 [2024-11-20 07:27:52.386204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.216 [2024-11-20 07:27:52.386232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.216 qpair failed and we were unable to recover it. 00:25:49.216 [2024-11-20 07:27:52.386313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.216 [2024-11-20 07:27:52.386341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.216 qpair failed and we were unable to recover it. 00:25:49.216 [2024-11-20 07:27:52.386455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.216 [2024-11-20 07:27:52.386484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.216 qpair failed and we were unable to recover it. 00:25:49.216 [2024-11-20 07:27:52.386573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.216 [2024-11-20 07:27:52.386599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.216 qpair failed and we were unable to recover it. 00:25:49.216 [2024-11-20 07:27:52.386684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.216 [2024-11-20 07:27:52.386710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.216 qpair failed and we were unable to recover it. 00:25:49.216 [2024-11-20 07:27:52.386815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.216 [2024-11-20 07:27:52.386842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.216 qpair failed and we were unable to recover it. 
00:25:49.216 [2024-11-20 07:27:52.386922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.216 [2024-11-20 07:27:52.386948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.216 qpair failed and we were unable to recover it. 00:25:49.216 [2024-11-20 07:27:52.387109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.216 [2024-11-20 07:27:52.387149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.216 qpair failed and we were unable to recover it. 00:25:49.216 [2024-11-20 07:27:52.387243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.216 [2024-11-20 07:27:52.387271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.216 qpair failed and we were unable to recover it. 00:25:49.216 [2024-11-20 07:27:52.387380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.216 [2024-11-20 07:27:52.387408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.216 qpair failed and we were unable to recover it. 00:25:49.216 [2024-11-20 07:27:52.387499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.216 [2024-11-20 07:27:52.387524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.216 qpair failed and we were unable to recover it. 00:25:49.216 [2024-11-20 07:27:52.387621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.216 [2024-11-20 07:27:52.387648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.216 qpair failed and we were unable to recover it. 00:25:49.216 [2024-11-20 07:27:52.387740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.216 [2024-11-20 07:27:52.387766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.216 qpair failed and we were unable to recover it. 00:25:49.216 [2024-11-20 07:27:52.387882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.216 [2024-11-20 07:27:52.387907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.216 qpair failed and we were unable to recover it. 00:25:49.216 [2024-11-20 07:27:52.387988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.216 [2024-11-20 07:27:52.388016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.216 qpair failed and we were unable to recover it. 00:25:49.216 [2024-11-20 07:27:52.388108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.216 [2024-11-20 07:27:52.388133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.216 qpair failed and we were unable to recover it. 
00:25:49.216 [2024-11-20 07:27:52.388243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.216 [2024-11-20 07:27:52.388269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.216 qpair failed and we were unable to recover it. 00:25:49.216 [2024-11-20 07:27:52.388374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.216 [2024-11-20 07:27:52.388415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.216 qpair failed and we were unable to recover it. 00:25:49.216 [2024-11-20 07:27:52.388517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.216 [2024-11-20 07:27:52.388547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.216 qpair failed and we were unable to recover it. 00:25:49.216 [2024-11-20 07:27:52.388674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.216 [2024-11-20 07:27:52.388701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.216 qpair failed and we were unable to recover it. 00:25:49.216 [2024-11-20 07:27:52.388791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.216 [2024-11-20 07:27:52.388824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.216 qpair failed and we were unable to recover it. 00:25:49.216 [2024-11-20 07:27:52.388914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.216 [2024-11-20 07:27:52.388940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.216 qpair failed and we were unable to recover it. 00:25:49.216 [2024-11-20 07:27:52.389054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.216 [2024-11-20 07:27:52.389082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.216 qpair failed and we were unable to recover it. 00:25:49.216 [2024-11-20 07:27:52.389166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.216 [2024-11-20 07:27:52.389193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.216 qpair failed and we were unable to recover it. 00:25:49.216 [2024-11-20 07:27:52.389297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.216 [2024-11-20 07:27:52.389345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.216 qpair failed and we were unable to recover it. 00:25:49.216 [2024-11-20 07:27:52.389436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.216 [2024-11-20 07:27:52.389465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.216 qpair failed and we were unable to recover it. 
00:25:49.216 [2024-11-20 07:27:52.389605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.216 [2024-11-20 07:27:52.389632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.216 qpair failed and we were unable to recover it. 00:25:49.216 [2024-11-20 07:27:52.389751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.216 [2024-11-20 07:27:52.389777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.216 qpair failed and we were unable to recover it. 00:25:49.216 [2024-11-20 07:27:52.389871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.216 [2024-11-20 07:27:52.389898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.216 qpair failed and we were unable to recover it. 00:25:49.216 [2024-11-20 07:27:52.389991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.217 [2024-11-20 07:27:52.390019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.217 qpair failed and we were unable to recover it. 00:25:49.217 [2024-11-20 07:27:52.390112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.217 [2024-11-20 07:27:52.390139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.217 qpair failed and we were unable to recover it. 00:25:49.217 [2024-11-20 07:27:52.390227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.217 [2024-11-20 07:27:52.390254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.217 qpair failed and we were unable to recover it. 00:25:49.217 [2024-11-20 07:27:52.390348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.217 [2024-11-20 07:27:52.390375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.217 qpair failed and we were unable to recover it. 00:25:49.217 [2024-11-20 07:27:52.390472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.217 [2024-11-20 07:27:52.390499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.217 qpair failed and we were unable to recover it. 00:25:49.217 [2024-11-20 07:27:52.390598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.217 [2024-11-20 07:27:52.390625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.217 qpair failed and we were unable to recover it. 00:25:49.217 [2024-11-20 07:27:52.390714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.217 [2024-11-20 07:27:52.390742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.217 qpair failed and we were unable to recover it. 
00:25:49.217 [2024-11-20 07:27:52.390859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.217 [2024-11-20 07:27:52.390886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.217 qpair failed and we were unable to recover it. 00:25:49.217 [2024-11-20 07:27:52.390971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.217 [2024-11-20 07:27:52.390998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.217 qpair failed and we were unable to recover it. 00:25:49.217 [2024-11-20 07:27:52.391086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.217 [2024-11-20 07:27:52.391114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.217 qpair failed and we were unable to recover it. 00:25:49.217 [2024-11-20 07:27:52.391225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.217 [2024-11-20 07:27:52.391252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.217 qpair failed and we were unable to recover it. 00:25:49.217 [2024-11-20 07:27:52.391361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.217 [2024-11-20 07:27:52.391389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.217 qpair failed and we were unable to recover it. 00:25:49.217 [2024-11-20 07:27:52.391472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.217 [2024-11-20 07:27:52.391499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.217 qpair failed and we were unable to recover it. 00:25:49.217 [2024-11-20 07:27:52.391603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.217 [2024-11-20 07:27:52.391630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.217 qpair failed and we were unable to recover it. 00:25:49.217 [2024-11-20 07:27:52.391706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.217 [2024-11-20 07:27:52.391733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.217 qpair failed and we were unable to recover it. 00:25:49.217 [2024-11-20 07:27:52.391826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.217 [2024-11-20 07:27:52.391852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.217 qpair failed and we were unable to recover it. 00:25:49.217 [2024-11-20 07:27:52.391947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.217 [2024-11-20 07:27:52.391974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.217 qpair failed and we were unable to recover it. 
00:25:49.217 [2024-11-20 07:27:52.392054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.217 [2024-11-20 07:27:52.392081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.217 qpair failed and we were unable to recover it. 00:25:49.217 [2024-11-20 07:27:52.392183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.217 [2024-11-20 07:27:52.392222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.217 qpair failed and we were unable to recover it. 00:25:49.217 [2024-11-20 07:27:52.392358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.217 [2024-11-20 07:27:52.392397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.217 qpair failed and we were unable to recover it. 00:25:49.217 [2024-11-20 07:27:52.392515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.217 [2024-11-20 07:27:52.392541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.217 qpair failed and we were unable to recover it. 00:25:49.217 [2024-11-20 07:27:52.392663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.217 [2024-11-20 07:27:52.392688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.217 qpair failed and we were unable to recover it. 00:25:49.217 [2024-11-20 07:27:52.392774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.217 [2024-11-20 07:27:52.392799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.217 qpair failed and we were unable to recover it. 00:25:49.217 [2024-11-20 07:27:52.392893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.217 [2024-11-20 07:27:52.392918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.217 qpair failed and we were unable to recover it. 00:25:49.217 [2024-11-20 07:27:52.393008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.217 [2024-11-20 07:27:52.393035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.217 qpair failed and we were unable to recover it. 00:25:49.217 [2024-11-20 07:27:52.393140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.217 [2024-11-20 07:27:52.393166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.217 qpair failed and we were unable to recover it. 00:25:49.217 [2024-11-20 07:27:52.393300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.217 [2024-11-20 07:27:52.393332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.217 qpair failed and we were unable to recover it. 
00:25:49.217 [2024-11-20 07:27:52.393443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.217 [2024-11-20 07:27:52.393468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.217 qpair failed and we were unable to recover it. 00:25:49.217 [2024-11-20 07:27:52.393562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.217 [2024-11-20 07:27:52.393592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.217 qpair failed and we were unable to recover it. 00:25:49.217 [2024-11-20 07:27:52.393682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.217 [2024-11-20 07:27:52.393708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.217 qpair failed and we were unable to recover it. 00:25:49.217 [2024-11-20 07:27:52.393796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.217 [2024-11-20 07:27:52.393824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.217 qpair failed and we were unable to recover it. 00:25:49.217 [2024-11-20 07:27:52.393910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.217 [2024-11-20 07:27:52.393936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.217 qpair failed and we were unable to recover it. 00:25:49.217 [2024-11-20 07:27:52.394026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.217 [2024-11-20 07:27:52.394053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.217 qpair failed and we were unable to recover it. 00:25:49.217 [2024-11-20 07:27:52.394165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.218 [2024-11-20 07:27:52.394192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.218 qpair failed and we were unable to recover it. 00:25:49.218 [2024-11-20 07:27:52.394264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.218 [2024-11-20 07:27:52.394290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.218 qpair failed and we were unable to recover it. 00:25:49.218 [2024-11-20 07:27:52.394391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.218 [2024-11-20 07:27:52.394417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.218 qpair failed and we were unable to recover it. 00:25:49.218 [2024-11-20 07:27:52.394495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.218 [2024-11-20 07:27:52.394522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.218 qpair failed and we were unable to recover it. 
00:25:49.218 [2024-11-20 07:27:52.394667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.218 [2024-11-20 07:27:52.394693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.218 qpair failed and we were unable to recover it. 00:25:49.218 [2024-11-20 07:27:52.394783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.218 [2024-11-20 07:27:52.394810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.218 qpair failed and we were unable to recover it. 00:25:49.218 [2024-11-20 07:27:52.394899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.218 [2024-11-20 07:27:52.394925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.218 qpair failed and we were unable to recover it. 00:25:49.218 [2024-11-20 07:27:52.395010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.218 [2024-11-20 07:27:52.395038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.218 qpair failed and we were unable to recover it. 00:25:49.218 [2024-11-20 07:27:52.395169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.218 [2024-11-20 07:27:52.395208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.218 qpair failed and we were unable to recover it. 00:25:49.218 [2024-11-20 07:27:52.395322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.218 [2024-11-20 07:27:52.395352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.218 qpair failed and we were unable to recover it. 00:25:49.218 [2024-11-20 07:27:52.395449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.218 [2024-11-20 07:27:52.395477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.218 qpair failed and we were unable to recover it. 00:25:49.218 [2024-11-20 07:27:52.395563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.218 [2024-11-20 07:27:52.395591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.218 qpair failed and we were unable to recover it. 00:25:49.218 [2024-11-20 07:27:52.395706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.218 [2024-11-20 07:27:52.395732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.218 qpair failed and we were unable to recover it. 00:25:49.218 [2024-11-20 07:27:52.395889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.218 [2024-11-20 07:27:52.395916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.218 qpair failed and we were unable to recover it. 
00:25:49.218 [2024-11-20 07:27:52.396039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.218 [2024-11-20 07:27:52.396066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.218 qpair failed and we were unable to recover it. 00:25:49.218 [2024-11-20 07:27:52.396210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.218 [2024-11-20 07:27:52.396236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.218 qpair failed and we were unable to recover it. 00:25:49.218 [2024-11-20 07:27:52.396396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.218 [2024-11-20 07:27:52.396422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.218 qpair failed and we were unable to recover it. 00:25:49.218 [2024-11-20 07:27:52.396540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.218 [2024-11-20 07:27:52.396567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.218 qpair failed and we were unable to recover it. 00:25:49.218 [2024-11-20 07:27:52.396641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.218 [2024-11-20 07:27:52.396667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.218 qpair failed and we were unable to recover it. 00:25:49.218 [2024-11-20 07:27:52.396783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.218 [2024-11-20 07:27:52.396809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.218 qpair failed and we were unable to recover it. 00:25:49.218 [2024-11-20 07:27:52.396898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.218 [2024-11-20 07:27:52.396925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.218 qpair failed and we were unable to recover it. 00:25:49.218 [2024-11-20 07:27:52.397025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.218 [2024-11-20 07:27:52.397052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.218 qpair failed and we were unable to recover it. 00:25:49.218 [2024-11-20 07:27:52.397140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.218 [2024-11-20 07:27:52.397167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.218 qpair failed and we were unable to recover it. 00:25:49.218 [2024-11-20 07:27:52.397253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.218 [2024-11-20 07:27:52.397279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.218 qpair failed and we were unable to recover it. 
00:25:49.218 [2024-11-20 07:27:52.397399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.218 [2024-11-20 07:27:52.397425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.218 qpair failed and we were unable to recover it. 00:25:49.218 [2024-11-20 07:27:52.397510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.218 [2024-11-20 07:27:52.397545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.218 qpair failed and we were unable to recover it. 00:25:49.218 [2024-11-20 07:27:52.397635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.218 [2024-11-20 07:27:52.397661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.218 qpair failed and we were unable to recover it. 00:25:49.218 [2024-11-20 07:27:52.397779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.218 [2024-11-20 07:27:52.397805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.218 qpair failed and we were unable to recover it. 00:25:49.218 [2024-11-20 07:27:52.397893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.218 [2024-11-20 07:27:52.397918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.218 qpair failed and we were unable to recover it. 00:25:49.218 [2024-11-20 07:27:52.398015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.218 [2024-11-20 07:27:52.398054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.218 qpair failed and we were unable to recover it. 00:25:49.218 [2024-11-20 07:27:52.398168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.218 [2024-11-20 07:27:52.398195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.218 qpair failed and we were unable to recover it. 00:25:49.218 [2024-11-20 07:27:52.398287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.218 [2024-11-20 07:27:52.398324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.218 qpair failed and we were unable to recover it. 00:25:49.218 [2024-11-20 07:27:52.398413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.218 [2024-11-20 07:27:52.398441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.218 qpair failed and we were unable to recover it. 00:25:49.218 [2024-11-20 07:27:52.398549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.218 [2024-11-20 07:27:52.398575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.218 qpair failed and we were unable to recover it. 
00:25:49.218 [2024-11-20 07:27:52.398666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.218 [2024-11-20 07:27:52.398692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.218 qpair failed and we were unable to recover it. 00:25:49.218 [2024-11-20 07:27:52.398804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.218 [2024-11-20 07:27:52.398830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.218 qpair failed and we were unable to recover it. 00:25:49.218 [2024-11-20 07:27:52.398920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.218 [2024-11-20 07:27:52.398946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.218 qpair failed and we were unable to recover it. 00:25:49.219 [2024-11-20 07:27:52.399047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.219 [2024-11-20 07:27:52.399087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.219 qpair failed and we were unable to recover it. 00:25:49.219 [2024-11-20 07:27:52.399173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.219 [2024-11-20 07:27:52.399201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.219 qpair failed and we were unable to recover it. 00:25:49.219 [2024-11-20 07:27:52.399286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.219 [2024-11-20 07:27:52.399323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.219 qpair failed and we were unable to recover it. 00:25:49.219 [2024-11-20 07:27:52.399404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.219 [2024-11-20 07:27:52.399430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.219 qpair failed and we were unable to recover it. 00:25:49.219 [2024-11-20 07:27:52.399514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.219 [2024-11-20 07:27:52.399540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.219 qpair failed and we were unable to recover it. 00:25:49.219 [2024-11-20 07:27:52.399690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.219 [2024-11-20 07:27:52.399717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.219 qpair failed and we were unable to recover it. 00:25:49.219 [2024-11-20 07:27:52.399832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.219 [2024-11-20 07:27:52.399858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.219 qpair failed and we were unable to recover it. 
00:25:49.219 [2024-11-20 07:27:52.399992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.219 [2024-11-20 07:27:52.400021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.219 qpair failed and we were unable to recover it. 00:25:49.219 [2024-11-20 07:27:52.400160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.219 [2024-11-20 07:27:52.400201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.219 qpair failed and we were unable to recover it. 00:25:49.219 [2024-11-20 07:27:52.400291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.219 [2024-11-20 07:27:52.400328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.219 qpair failed and we were unable to recover it. 00:25:49.219 [2024-11-20 07:27:52.400419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.219 [2024-11-20 07:27:52.400446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.219 qpair failed and we were unable to recover it. 00:25:49.219 [2024-11-20 07:27:52.400560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.219 [2024-11-20 07:27:52.400586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.219 qpair failed and we were unable to recover it. 00:25:49.219 [2024-11-20 07:27:52.400699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.219 [2024-11-20 07:27:52.400726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.219 qpair failed and we were unable to recover it. 00:25:49.219 [2024-11-20 07:27:52.400813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.219 [2024-11-20 07:27:52.400840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.219 qpair failed and we were unable to recover it. 00:25:49.219 [2024-11-20 07:27:52.400925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.219 [2024-11-20 07:27:52.400952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.219 qpair failed and we were unable to recover it. 00:25:49.219 [2024-11-20 07:27:52.401050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.219 [2024-11-20 07:27:52.401090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.219 qpair failed and we were unable to recover it. 00:25:49.219 [2024-11-20 07:27:52.401190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.219 [2024-11-20 07:27:52.401218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.219 qpair failed and we were unable to recover it. 
00:25:49.219 [2024-11-20 07:27:52.401325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.219 [2024-11-20 07:27:52.401351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.219 qpair failed and we were unable to recover it. 00:25:49.219 [2024-11-20 07:27:52.401440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.219 [2024-11-20 07:27:52.401466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.219 qpair failed and we were unable to recover it. 00:25:49.219 [2024-11-20 07:27:52.401547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.219 [2024-11-20 07:27:52.401573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.219 qpair failed and we were unable to recover it. 00:25:49.219 [2024-11-20 07:27:52.401698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.219 [2024-11-20 07:27:52.401724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.219 qpair failed and we were unable to recover it. 00:25:49.219 [2024-11-20 07:27:52.401812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.219 [2024-11-20 07:27:52.401839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.219 qpair failed and we were unable to recover it. 00:25:49.219 [2024-11-20 07:27:52.401931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.219 [2024-11-20 07:27:52.401958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.219 qpair failed and we were unable to recover it. 00:25:49.219 [2024-11-20 07:27:52.402056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.219 [2024-11-20 07:27:52.402082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.219 qpair failed and we were unable to recover it. 00:25:49.219 [2024-11-20 07:27:52.402198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.219 [2024-11-20 07:27:52.402225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.219 qpair failed and we were unable to recover it. 00:25:49.219 [2024-11-20 07:27:52.402311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.219 [2024-11-20 07:27:52.402338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.219 qpair failed and we were unable to recover it. 00:25:49.219 [2024-11-20 07:27:52.402426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.219 [2024-11-20 07:27:52.402453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.219 qpair failed and we were unable to recover it. 
00:25:49.219 [2024-11-20 07:27:52.402544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.219 [2024-11-20 07:27:52.402571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.219 qpair failed and we were unable to recover it. 00:25:49.219 [2024-11-20 07:27:52.402668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.219 [2024-11-20 07:27:52.402699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.219 qpair failed and we were unable to recover it. 00:25:49.219 [2024-11-20 07:27:52.402818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.219 [2024-11-20 07:27:52.402845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.219 qpair failed and we were unable to recover it. 00:25:49.219 [2024-11-20 07:27:52.402938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.219 [2024-11-20 07:27:52.402964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.219 qpair failed and we were unable to recover it. 00:25:49.219 [2024-11-20 07:27:52.403052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.219 [2024-11-20 07:27:52.403078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.219 qpair failed and we were unable to recover it. 00:25:49.219 [2024-11-20 07:27:52.403229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.219 [2024-11-20 07:27:52.403269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.219 qpair failed and we were unable to recover it. 00:25:49.219 [2024-11-20 07:27:52.403402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.219 [2024-11-20 07:27:52.403430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.219 qpair failed and we were unable to recover it. 00:25:49.219 [2024-11-20 07:27:52.403543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.219 [2024-11-20 07:27:52.403569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.219 qpair failed and we were unable to recover it. 00:25:49.219 [2024-11-20 07:27:52.403687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.219 [2024-11-20 07:27:52.403713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.219 qpair failed and we were unable to recover it. 00:25:49.219 [2024-11-20 07:27:52.403799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.219 [2024-11-20 07:27:52.403828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.219 qpair failed and we were unable to recover it. 
00:25:49.220 [2024-11-20 07:27:52.403917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.220 [2024-11-20 07:27:52.403944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.220 qpair failed and we were unable to recover it. 00:25:49.220 [2024-11-20 07:27:52.404056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.220 [2024-11-20 07:27:52.404083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.220 qpair failed and we were unable to recover it. 00:25:49.220 [2024-11-20 07:27:52.404167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.220 [2024-11-20 07:27:52.404195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.220 qpair failed and we were unable to recover it. 00:25:49.220 [2024-11-20 07:27:52.404345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.220 [2024-11-20 07:27:52.404385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.220 qpair failed and we were unable to recover it. 00:25:49.220 [2024-11-20 07:27:52.404476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.220 [2024-11-20 07:27:52.404504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.220 qpair failed and we were unable to recover it. 00:25:49.220 [2024-11-20 07:27:52.404606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.220 [2024-11-20 07:27:52.404634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.220 qpair failed and we were unable to recover it. 00:25:49.220 [2024-11-20 07:27:52.404743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.220 [2024-11-20 07:27:52.404769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.220 qpair failed and we were unable to recover it. 00:25:49.220 [2024-11-20 07:27:52.404854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.220 [2024-11-20 07:27:52.404881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.220 qpair failed and we were unable to recover it. 00:25:49.220 [2024-11-20 07:27:52.404962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.220 [2024-11-20 07:27:52.404988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.220 qpair failed and we were unable to recover it. 00:25:49.220 [2024-11-20 07:27:52.405063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.220 [2024-11-20 07:27:52.405089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.220 qpair failed and we were unable to recover it. 
00:25:49.220 [2024-11-20 07:27:52.405211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.220 [2024-11-20 07:27:52.405251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.220 qpair failed and we were unable to recover it. 00:25:49.220 [2024-11-20 07:27:52.405352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.220 [2024-11-20 07:27:52.405380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.220 qpair failed and we were unable to recover it. 00:25:49.220 [2024-11-20 07:27:52.405493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.220 [2024-11-20 07:27:52.405520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.220 qpair failed and we were unable to recover it. 00:25:49.220 [2024-11-20 07:27:52.405657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.220 [2024-11-20 07:27:52.405683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.220 qpair failed and we were unable to recover it. 00:25:49.220 [2024-11-20 07:27:52.405774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.220 [2024-11-20 07:27:52.405800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.220 qpair failed and we were unable to recover it. 00:25:49.220 [2024-11-20 07:27:52.405890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.220 [2024-11-20 07:27:52.405919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.220 qpair failed and we were unable to recover it. 00:25:49.220 [2024-11-20 07:27:52.406035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.220 [2024-11-20 07:27:52.406062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.220 qpair failed and we were unable to recover it. 00:25:49.220 [2024-11-20 07:27:52.406181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.220 [2024-11-20 07:27:52.406212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.220 qpair failed and we were unable to recover it. 00:25:49.220 [2024-11-20 07:27:52.406301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.220 [2024-11-20 07:27:52.406343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.220 qpair failed and we were unable to recover it. 00:25:49.220 [2024-11-20 07:27:52.406470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.220 [2024-11-20 07:27:52.406497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.220 qpair failed and we were unable to recover it. 
00:25:49.220 [2024-11-20 07:27:52.406582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.220 [2024-11-20 07:27:52.406608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.220 qpair failed and we were unable to recover it. 00:25:49.220 [2024-11-20 07:27:52.406738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.220 [2024-11-20 07:27:52.406764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.220 qpair failed and we were unable to recover it. 00:25:49.220 [2024-11-20 07:27:52.406899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.220 [2024-11-20 07:27:52.406924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.220 qpair failed and we were unable to recover it. 00:25:49.220 [2024-11-20 07:27:52.407033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.220 [2024-11-20 07:27:52.407060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.220 qpair failed and we were unable to recover it. 00:25:49.220 [2024-11-20 07:27:52.407149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.220 [2024-11-20 07:27:52.407176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.220 qpair failed and we were unable to recover it. 00:25:49.220 [2024-11-20 07:27:52.407266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.220 [2024-11-20 07:27:52.407295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.220 qpair failed and we were unable to recover it. 00:25:49.220 [2024-11-20 07:27:52.407391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.220 [2024-11-20 07:27:52.407418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.220 qpair failed and we were unable to recover it. 00:25:49.220 [2024-11-20 07:27:52.407535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.220 [2024-11-20 07:27:52.407562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.220 qpair failed and we were unable to recover it. 00:25:49.220 [2024-11-20 07:27:52.407651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.220 [2024-11-20 07:27:52.407678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.220 qpair failed and we were unable to recover it. 00:25:49.220 [2024-11-20 07:27:52.407774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.220 [2024-11-20 07:27:52.407802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.220 qpair failed and we were unable to recover it. 
00:25:49.220 [2024-11-20 07:27:52.407889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.220 [2024-11-20 07:27:52.407915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.220 qpair failed and we were unable to recover it. 00:25:49.220 [2024-11-20 07:27:52.407999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.220 [2024-11-20 07:27:52.408027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.220 qpair failed and we were unable to recover it. 00:25:49.220 [2024-11-20 07:27:52.408119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.220 [2024-11-20 07:27:52.408147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.220 qpair failed and we were unable to recover it. 00:25:49.220 [2024-11-20 07:27:52.408271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.220 [2024-11-20 07:27:52.408321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.220 qpair failed and we were unable to recover it. 00:25:49.220 [2024-11-20 07:27:52.408421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.220 [2024-11-20 07:27:52.408449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.220 qpair failed and we were unable to recover it. 00:25:49.220 [2024-11-20 07:27:52.408558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.220 [2024-11-20 07:27:52.408584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.220 qpair failed and we were unable to recover it. 00:25:49.221 [2024-11-20 07:27:52.408702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.221 [2024-11-20 07:27:52.408728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.221 qpair failed and we were unable to recover it. 00:25:49.221 [2024-11-20 07:27:52.408840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.221 [2024-11-20 07:27:52.408868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.221 qpair failed and we were unable to recover it. 00:25:49.221 [2024-11-20 07:27:52.408960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.221 [2024-11-20 07:27:52.408986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.221 qpair failed and we were unable to recover it. 00:25:49.221 [2024-11-20 07:27:52.409096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.221 [2024-11-20 07:27:52.409135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.221 qpair failed and we were unable to recover it. 
00:25:49.221 [2024-11-20 07:27:52.409228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.221 [2024-11-20 07:27:52.409256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.221 qpair failed and we were unable to recover it. 00:25:49.221 [2024-11-20 07:27:52.409357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.221 [2024-11-20 07:27:52.409386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.221 qpair failed and we were unable to recover it. 00:25:49.221 [2024-11-20 07:27:52.409468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.221 [2024-11-20 07:27:52.409495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.221 qpair failed and we were unable to recover it. 00:25:49.221 [2024-11-20 07:27:52.409609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.221 [2024-11-20 07:27:52.409636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.221 qpair failed and we were unable to recover it. 00:25:49.221 [2024-11-20 07:27:52.409722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.221 [2024-11-20 07:27:52.409748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.221 qpair failed and we were unable to recover it. 00:25:49.221 [2024-11-20 07:27:52.409829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.221 [2024-11-20 07:27:52.409857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.221 qpair failed and we were unable to recover it. 00:25:49.221 [2024-11-20 07:27:52.409998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.221 [2024-11-20 07:27:52.410024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.221 qpair failed and we were unable to recover it. 00:25:49.221 [2024-11-20 07:27:52.410115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.221 [2024-11-20 07:27:52.410144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.221 qpair failed and we were unable to recover it. 00:25:49.221 [2024-11-20 07:27:52.410257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.221 [2024-11-20 07:27:52.410284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.221 qpair failed and we were unable to recover it. 00:25:49.221 [2024-11-20 07:27:52.410409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.221 [2024-11-20 07:27:52.410438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.221 qpair failed and we were unable to recover it. 
00:25:49.221 [2024-11-20 07:27:52.410554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.221 [2024-11-20 07:27:52.410581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.221 qpair failed and we were unable to recover it. 00:25:49.221 [2024-11-20 07:27:52.410674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.221 [2024-11-20 07:27:52.410700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.221 qpair failed and we were unable to recover it. 00:25:49.221 [2024-11-20 07:27:52.410786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.221 [2024-11-20 07:27:52.410813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.221 qpair failed and we were unable to recover it. 00:25:49.221 [2024-11-20 07:27:52.410905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.221 [2024-11-20 07:27:52.410933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.221 qpair failed and we were unable to recover it. 00:25:49.221 [2024-11-20 07:27:52.411040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.221 [2024-11-20 07:27:52.411065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.221 qpair failed and we were unable to recover it. 00:25:49.221 [2024-11-20 07:27:52.411200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.221 [2024-11-20 07:27:52.411229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.221 qpair failed and we were unable to recover it. 00:25:49.221 [2024-11-20 07:27:52.411322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.221 [2024-11-20 07:27:52.411351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.221 qpair failed and we were unable to recover it. 00:25:49.221 [2024-11-20 07:27:52.411448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.221 [2024-11-20 07:27:52.411474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.221 qpair failed and we were unable to recover it. 00:25:49.221 [2024-11-20 07:27:52.411586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.221 [2024-11-20 07:27:52.411618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.221 qpair failed and we were unable to recover it. 00:25:49.221 [2024-11-20 07:27:52.411736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.221 [2024-11-20 07:27:52.411764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.221 qpair failed and we were unable to recover it. 
00:25:49.221 [2024-11-20 07:27:52.411843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.221 [2024-11-20 07:27:52.411870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.221 qpair failed and we were unable to recover it. 00:25:49.221 [2024-11-20 07:27:52.412015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.221 [2024-11-20 07:27:52.412042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.221 qpair failed and we were unable to recover it. 00:25:49.221 [2024-11-20 07:27:52.412155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.221 [2024-11-20 07:27:52.412180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.221 qpair failed and we were unable to recover it. 00:25:49.221 [2024-11-20 07:27:52.412269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.221 [2024-11-20 07:27:52.412295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.221 qpair failed and we were unable to recover it. 00:25:49.221 [2024-11-20 07:27:52.412379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.221 [2024-11-20 07:27:52.412405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.221 qpair failed and we were unable to recover it. 00:25:49.221 [2024-11-20 07:27:52.412487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.221 [2024-11-20 07:27:52.412512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.221 qpair failed and we were unable to recover it. 00:25:49.221 [2024-11-20 07:27:52.412621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.221 [2024-11-20 07:27:52.412646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.221 qpair failed and we were unable to recover it. 00:25:49.221 [2024-11-20 07:27:52.412738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.221 [2024-11-20 07:27:52.412765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.221 qpair failed and we were unable to recover it. 00:25:49.221 [2024-11-20 07:27:52.412881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.221 [2024-11-20 07:27:52.412907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.221 qpair failed and we were unable to recover it. 00:25:49.221 [2024-11-20 07:27:52.413021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.221 [2024-11-20 07:27:52.413047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.221 qpair failed and we were unable to recover it. 
00:25:49.221 [2024-11-20 07:27:52.413133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.221 [2024-11-20 07:27:52.413160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.221 qpair failed and we were unable to recover it. 00:25:49.221 [2024-11-20 07:27:52.413248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.221 [2024-11-20 07:27:52.413277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.221 qpair failed and we were unable to recover it. 00:25:49.221 [2024-11-20 07:27:52.413385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.221 [2024-11-20 07:27:52.413424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.221 qpair failed and we were unable to recover it. 00:25:49.222 [2024-11-20 07:27:52.413514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.222 [2024-11-20 07:27:52.413541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.222 qpair failed and we were unable to recover it. 00:25:49.222 [2024-11-20 07:27:52.413667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.222 [2024-11-20 07:27:52.413692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.222 qpair failed and we were unable to recover it. 00:25:49.222 [2024-11-20 07:27:52.413805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.222 [2024-11-20 07:27:52.413832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.222 qpair failed and we were unable to recover it. 00:25:49.222 [2024-11-20 07:27:52.413922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.222 [2024-11-20 07:27:52.413948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.222 qpair failed and we were unable to recover it. 00:25:49.222 [2024-11-20 07:27:52.414061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.222 [2024-11-20 07:27:52.414087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.222 qpair failed and we were unable to recover it. 00:25:49.222 [2024-11-20 07:27:52.414169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.222 [2024-11-20 07:27:52.414198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.222 qpair failed and we were unable to recover it. 00:25:49.222 [2024-11-20 07:27:52.414300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.222 [2024-11-20 07:27:52.414333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.222 qpair failed and we were unable to recover it. 
00:25:49.222 [2024-11-20 07:27:52.414443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.222 [2024-11-20 07:27:52.414470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420
00:25:49.222 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error; qpair failed and we were unable to recover it.) repeats for every remaining connection attempt logged between 2024-11-20 07:27:52.414 and 07:27:52.444 (Jenkins timestamps 00:25:49.222-00:25:49.227), all targeting addr=10.0.0.2, port=4420 and cycling over tqpair handles 0x1b2bfa0, 0x7fce10000b90, 0x7fce14000b90 and 0x7fce1c000b90 ...]
00:25:49.227 [2024-11-20 07:27:52.444273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.227 [2024-11-20 07:27:52.444322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420
00:25:49.227 qpair failed and we were unable to recover it.
00:25:49.227 [2024-11-20 07:27:52.444428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.227 [2024-11-20 07:27:52.444467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.227 qpair failed and we were unable to recover it. 00:25:49.227 [2024-11-20 07:27:52.444610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.227 [2024-11-20 07:27:52.444641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.228 qpair failed and we were unable to recover it. 00:25:49.228 [2024-11-20 07:27:52.444789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.228 [2024-11-20 07:27:52.444819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.228 qpair failed and we were unable to recover it. 00:25:49.228 [2024-11-20 07:27:52.444971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.228 [2024-11-20 07:27:52.445016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.228 qpair failed and we were unable to recover it. 00:25:49.228 [2024-11-20 07:27:52.445155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.228 [2024-11-20 07:27:52.445184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.228 qpair failed and we were unable to recover it. 00:25:49.228 [2024-11-20 07:27:52.445332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.228 [2024-11-20 07:27:52.445372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.228 qpair failed and we were unable to recover it. 00:25:49.228 [2024-11-20 07:27:52.445522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.228 [2024-11-20 07:27:52.445561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.228 qpair failed and we were unable to recover it. 00:25:49.228 [2024-11-20 07:27:52.445679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.228 [2024-11-20 07:27:52.445711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.228 qpair failed and we were unable to recover it. 00:25:49.228 [2024-11-20 07:27:52.445839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.228 [2024-11-20 07:27:52.445877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.228 qpair failed and we were unable to recover it. 00:25:49.228 [2024-11-20 07:27:52.446038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.228 [2024-11-20 07:27:52.446075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.228 qpair failed and we were unable to recover it. 
00:25:49.228 [2024-11-20 07:27:52.446236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.228 [2024-11-20 07:27:52.446273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.228 qpair failed and we were unable to recover it. 00:25:49.228 [2024-11-20 07:27:52.446428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.228 [2024-11-20 07:27:52.446455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.228 qpair failed and we were unable to recover it. 00:25:49.228 [2024-11-20 07:27:52.446587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.228 [2024-11-20 07:27:52.446636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.228 qpair failed and we were unable to recover it. 00:25:49.228 [2024-11-20 07:27:52.446771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.228 [2024-11-20 07:27:52.446824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.228 qpair failed and we were unable to recover it. 00:25:49.228 [2024-11-20 07:27:52.446933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.228 [2024-11-20 07:27:52.446987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.228 qpair failed and we were unable to recover it. 00:25:49.228 [2024-11-20 07:27:52.447101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.228 [2024-11-20 07:27:52.447128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.228 qpair failed and we were unable to recover it. 00:25:49.228 [2024-11-20 07:27:52.447211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.228 [2024-11-20 07:27:52.447237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.228 qpair failed and we were unable to recover it. 00:25:49.228 [2024-11-20 07:27:52.447388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.228 [2024-11-20 07:27:52.447428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.228 qpair failed and we were unable to recover it. 00:25:49.228 [2024-11-20 07:27:52.447573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.228 [2024-11-20 07:27:52.447601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.228 qpair failed and we were unable to recover it. 00:25:49.228 [2024-11-20 07:27:52.447737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.228 [2024-11-20 07:27:52.447763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.228 qpair failed and we were unable to recover it. 
00:25:49.228 [2024-11-20 07:27:52.447877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.228 [2024-11-20 07:27:52.447903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.228 qpair failed and we were unable to recover it. 00:25:49.228 [2024-11-20 07:27:52.448018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.228 [2024-11-20 07:27:52.448047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.228 qpair failed and we were unable to recover it. 00:25:49.228 [2024-11-20 07:27:52.448153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.228 [2024-11-20 07:27:52.448192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.228 qpair failed and we were unable to recover it. 00:25:49.228 [2024-11-20 07:27:52.448321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.228 [2024-11-20 07:27:52.448354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.228 qpair failed and we were unable to recover it. 00:25:49.228 [2024-11-20 07:27:52.448461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.228 [2024-11-20 07:27:52.448491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.228 qpair failed and we were unable to recover it. 00:25:49.228 [2024-11-20 07:27:52.448634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.228 [2024-11-20 07:27:52.448677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.228 qpair failed and we were unable to recover it. 00:25:49.228 [2024-11-20 07:27:52.448804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.228 [2024-11-20 07:27:52.448852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.228 qpair failed and we were unable to recover it. 00:25:49.228 [2024-11-20 07:27:52.448956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.228 [2024-11-20 07:27:52.449000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.228 qpair failed and we were unable to recover it. 00:25:49.228 [2024-11-20 07:27:52.449113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.228 [2024-11-20 07:27:52.449139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.228 qpair failed and we were unable to recover it. 00:25:49.228 [2024-11-20 07:27:52.449280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.228 [2024-11-20 07:27:52.449314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.228 qpair failed and we were unable to recover it. 
00:25:49.228 [2024-11-20 07:27:52.449430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.228 [2024-11-20 07:27:52.449457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.228 qpair failed and we were unable to recover it. 00:25:49.228 [2024-11-20 07:27:52.449537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.228 [2024-11-20 07:27:52.449563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.228 qpair failed and we were unable to recover it. 00:25:49.228 [2024-11-20 07:27:52.449653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.228 [2024-11-20 07:27:52.449680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.228 qpair failed and we were unable to recover it. 00:25:49.228 [2024-11-20 07:27:52.449797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.228 [2024-11-20 07:27:52.449824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.228 qpair failed and we were unable to recover it. 00:25:49.228 [2024-11-20 07:27:52.449917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.228 [2024-11-20 07:27:52.449944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.228 qpair failed and we were unable to recover it. 00:25:49.228 [2024-11-20 07:27:52.450078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.228 [2024-11-20 07:27:52.450105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.228 qpair failed and we were unable to recover it. 00:25:49.228 [2024-11-20 07:27:52.450215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.228 [2024-11-20 07:27:52.450241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.228 qpair failed and we were unable to recover it. 00:25:49.228 [2024-11-20 07:27:52.450349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.228 [2024-11-20 07:27:52.450376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.228 qpair failed and we were unable to recover it. 00:25:49.228 [2024-11-20 07:27:52.450468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.228 [2024-11-20 07:27:52.450496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.228 qpair failed and we were unable to recover it. 00:25:49.228 [2024-11-20 07:27:52.450643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.228 [2024-11-20 07:27:52.450669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.228 qpair failed and we were unable to recover it. 
00:25:49.228 [2024-11-20 07:27:52.450774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.228 [2024-11-20 07:27:52.450801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.228 qpair failed and we were unable to recover it. 00:25:49.228 [2024-11-20 07:27:52.450954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.229 [2024-11-20 07:27:52.450993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.229 qpair failed and we were unable to recover it. 00:25:49.229 [2024-11-20 07:27:52.451112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.229 [2024-11-20 07:27:52.451141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.229 qpair failed and we were unable to recover it. 00:25:49.229 [2024-11-20 07:27:52.451242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.229 [2024-11-20 07:27:52.451283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.229 qpair failed and we were unable to recover it. 00:25:49.229 [2024-11-20 07:27:52.451443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.229 [2024-11-20 07:27:52.451475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.229 qpair failed and we were unable to recover it. 00:25:49.229 [2024-11-20 07:27:52.451614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.229 [2024-11-20 07:27:52.451645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.229 qpair failed and we were unable to recover it. 00:25:49.229 [2024-11-20 07:27:52.451823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.229 [2024-11-20 07:27:52.451860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.229 qpair failed and we were unable to recover it. 00:25:49.229 [2024-11-20 07:27:52.451979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.229 [2024-11-20 07:27:52.452017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.229 qpair failed and we were unable to recover it. 00:25:49.229 [2024-11-20 07:27:52.452184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.229 [2024-11-20 07:27:52.452221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.229 qpair failed and we were unable to recover it. 00:25:49.229 [2024-11-20 07:27:52.452384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.229 [2024-11-20 07:27:52.452424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.229 qpair failed and we were unable to recover it. 
00:25:49.229 [2024-11-20 07:27:52.452551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.229 [2024-11-20 07:27:52.452597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.229 qpair failed and we were unable to recover it. 00:25:49.229 [2024-11-20 07:27:52.452730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.229 [2024-11-20 07:27:52.452759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.229 qpair failed and we were unable to recover it. 00:25:49.229 [2024-11-20 07:27:52.452907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.229 [2024-11-20 07:27:52.452952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.229 qpair failed and we were unable to recover it. 00:25:49.229 [2024-11-20 07:27:52.453063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.229 [2024-11-20 07:27:52.453090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.229 qpair failed and we were unable to recover it. 00:25:49.229 [2024-11-20 07:27:52.453200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.229 [2024-11-20 07:27:52.453227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.229 qpair failed and we were unable to recover it. 00:25:49.229 [2024-11-20 07:27:52.453353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.229 [2024-11-20 07:27:52.453382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.229 qpair failed and we were unable to recover it. 00:25:49.229 [2024-11-20 07:27:52.453474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.229 [2024-11-20 07:27:52.453501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.229 qpair failed and we were unable to recover it. 00:25:49.229 [2024-11-20 07:27:52.453587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.229 [2024-11-20 07:27:52.453614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.229 qpair failed and we were unable to recover it. 00:25:49.229 [2024-11-20 07:27:52.453708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.229 [2024-11-20 07:27:52.453736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.229 qpair failed and we were unable to recover it. 00:25:49.229 [2024-11-20 07:27:52.453843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.229 [2024-11-20 07:27:52.453870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.229 qpair failed and we were unable to recover it. 
00:25:49.229 [2024-11-20 07:27:52.453984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.229 [2024-11-20 07:27:52.454012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.229 qpair failed and we were unable to recover it. 00:25:49.229 [2024-11-20 07:27:52.454124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.229 [2024-11-20 07:27:52.454152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.229 qpair failed and we were unable to recover it. 00:25:49.229 [2024-11-20 07:27:52.454250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.229 [2024-11-20 07:27:52.454296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.229 qpair failed and we were unable to recover it. 00:25:49.229 [2024-11-20 07:27:52.454425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.229 [2024-11-20 07:27:52.454459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.229 qpair failed and we were unable to recover it. 00:25:49.229 [2024-11-20 07:27:52.454565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.229 [2024-11-20 07:27:52.454594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.229 qpair failed and we were unable to recover it. 00:25:49.229 [2024-11-20 07:27:52.454733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.229 [2024-11-20 07:27:52.454770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.229 qpair failed and we were unable to recover it. 00:25:49.229 [2024-11-20 07:27:52.454971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.229 [2024-11-20 07:27:52.455007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.229 qpair failed and we were unable to recover it. 00:25:49.229 [2024-11-20 07:27:52.455152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.229 [2024-11-20 07:27:52.455188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.229 qpair failed and we were unable to recover it. 00:25:49.229 [2024-11-20 07:27:52.455365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.229 [2024-11-20 07:27:52.455392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.229 qpair failed and we were unable to recover it. 00:25:49.229 [2024-11-20 07:27:52.455508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.229 [2024-11-20 07:27:52.455534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.229 qpair failed and we were unable to recover it. 
00:25:49.229 [2024-11-20 07:27:52.455646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.229 [2024-11-20 07:27:52.455676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.229 qpair failed and we were unable to recover it. 00:25:49.229 [2024-11-20 07:27:52.455872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.229 [2024-11-20 07:27:52.455910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.229 qpair failed and we were unable to recover it. 00:25:49.229 [2024-11-20 07:27:52.456028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.229 [2024-11-20 07:27:52.456075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.229 qpair failed and we were unable to recover it. 00:25:49.229 [2024-11-20 07:27:52.456230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.229 [2024-11-20 07:27:52.456259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.229 qpair failed and we were unable to recover it. 00:25:49.229 [2024-11-20 07:27:52.456372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.229 [2024-11-20 07:27:52.456412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.229 qpair failed and we were unable to recover it. 00:25:49.229 [2024-11-20 07:27:52.456539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.229 [2024-11-20 07:27:52.456567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.229 qpair failed and we were unable to recover it. 00:25:49.229 [2024-11-20 07:27:52.456721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.229 [2024-11-20 07:27:52.456752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.229 qpair failed and we were unable to recover it. 00:25:49.229 [2024-11-20 07:27:52.456981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.229 [2024-11-20 07:27:52.457020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.229 qpair failed and we were unable to recover it. 00:25:49.229 [2024-11-20 07:27:52.457261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.229 [2024-11-20 07:27:52.457288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.229 qpair failed and we were unable to recover it. 00:25:49.229 [2024-11-20 07:27:52.457417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.229 [2024-11-20 07:27:52.457445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.229 qpair failed and we were unable to recover it. 
00:25:49.229 [2024-11-20 07:27:52.457534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.229 [2024-11-20 07:27:52.457561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.229 qpair failed and we were unable to recover it. 00:25:49.230 [2024-11-20 07:27:52.457722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.230 [2024-11-20 07:27:52.457752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.230 qpair failed and we were unable to recover it. 00:25:49.230 [2024-11-20 07:27:52.457876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.230 [2024-11-20 07:27:52.457947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.230 qpair failed and we were unable to recover it. 00:25:49.230 [2024-11-20 07:27:52.458183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.230 [2024-11-20 07:27:52.458248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.230 qpair failed and we were unable to recover it. 00:25:49.230 [2024-11-20 07:27:52.458384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.230 [2024-11-20 07:27:52.458412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.230 qpair failed and we were unable to recover it. 00:25:49.230 [2024-11-20 07:27:52.458534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.230 [2024-11-20 07:27:52.458560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.230 qpair failed and we were unable to recover it. 00:25:49.230 [2024-11-20 07:27:52.458660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.230 [2024-11-20 07:27:52.458712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.230 qpair failed and we were unable to recover it. 00:25:49.230 [2024-11-20 07:27:52.458866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.230 [2024-11-20 07:27:52.458927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.230 qpair failed and we were unable to recover it. 00:25:49.230 [2024-11-20 07:27:52.459095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.230 [2024-11-20 07:27:52.459135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.230 qpair failed and we were unable to recover it. 00:25:49.230 [2024-11-20 07:27:52.459368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.230 [2024-11-20 07:27:52.459398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.230 qpair failed and we were unable to recover it. 
00:25:49.230 [2024-11-20 07:27:52.459542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.230 [2024-11-20 07:27:52.459569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.230 qpair failed and we were unable to recover it. 00:25:49.230 [2024-11-20 07:27:52.459660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.230 [2024-11-20 07:27:52.459686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.230 qpair failed and we were unable to recover it. 00:25:49.230 [2024-11-20 07:27:52.459779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.230 [2024-11-20 07:27:52.459805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.230 qpair failed and we were unable to recover it. 00:25:49.230 [2024-11-20 07:27:52.459915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.230 [2024-11-20 07:27:52.459942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.230 qpair failed and we were unable to recover it. 00:25:49.230 [2024-11-20 07:27:52.460054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.230 [2024-11-20 07:27:52.460080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.230 qpair failed and we were unable to recover it. 00:25:49.230 [2024-11-20 07:27:52.460188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.230 [2024-11-20 07:27:52.460215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.230 qpair failed and we were unable to recover it. 00:25:49.230 [2024-11-20 07:27:52.460343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.230 [2024-11-20 07:27:52.460382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.230 qpair failed and we were unable to recover it. 00:25:49.230 [2024-11-20 07:27:52.460515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.230 [2024-11-20 07:27:52.460553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.230 qpair failed and we were unable to recover it. 00:25:49.230 [2024-11-20 07:27:52.460699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.230 [2024-11-20 07:27:52.460726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.230 qpair failed and we were unable to recover it. 00:25:49.230 [2024-11-20 07:27:52.460856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.230 [2024-11-20 07:27:52.460882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.230 qpair failed and we were unable to recover it. 
00:25:49.230 [2024-11-20 07:27:52.461011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.230 [2024-11-20 07:27:52.461037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.230 qpair failed and we were unable to recover it. 00:25:49.230 [2024-11-20 07:27:52.461147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.230 [2024-11-20 07:27:52.461173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.230 qpair failed and we were unable to recover it. 00:25:49.230 [2024-11-20 07:27:52.461269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.230 [2024-11-20 07:27:52.461295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.230 qpair failed and we were unable to recover it. 00:25:49.230 [2024-11-20 07:27:52.461418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.230 [2024-11-20 07:27:52.461450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.230 qpair failed and we were unable to recover it. 00:25:49.230 [2024-11-20 07:27:52.461542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.230 [2024-11-20 07:27:52.461569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.230 qpair failed and we were unable to recover it. 00:25:49.230 [2024-11-20 07:27:52.461705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.230 [2024-11-20 07:27:52.461742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.230 qpair failed and we were unable to recover it. 00:25:49.230 [2024-11-20 07:27:52.461912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.230 [2024-11-20 07:27:52.461949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.230 qpair failed and we were unable to recover it. 00:25:49.230 [2024-11-20 07:27:52.462091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.230 [2024-11-20 07:27:52.462129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.230 qpair failed and we were unable to recover it. 00:25:49.230 [2024-11-20 07:27:52.462270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.230 [2024-11-20 07:27:52.462296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.230 qpair failed and we were unable to recover it. 00:25:49.230 [2024-11-20 07:27:52.462411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.230 [2024-11-20 07:27:52.462438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.230 qpair failed and we were unable to recover it. 
00:25:49.230 [2024-11-20 07:27:52.462530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.230 [2024-11-20 07:27:52.462558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.230 qpair failed and we were unable to recover it. 00:25:49.230 [2024-11-20 07:27:52.462780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.230 [2024-11-20 07:27:52.462817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.230 qpair failed and we were unable to recover it. 00:25:49.230 [2024-11-20 07:27:52.462949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.230 [2024-11-20 07:27:52.463002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.230 qpair failed and we were unable to recover it. 00:25:49.230 [2024-11-20 07:27:52.463155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.230 [2024-11-20 07:27:52.463193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.230 qpair failed and we were unable to recover it. 00:25:49.230 [2024-11-20 07:27:52.463317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.230 [2024-11-20 07:27:52.463344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.230 qpair failed and we were unable to recover it. 00:25:49.230 [2024-11-20 07:27:52.463451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.230 [2024-11-20 07:27:52.463477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.230 qpair failed and we were unable to recover it. 00:25:49.230 [2024-11-20 07:27:52.463564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.230 [2024-11-20 07:27:52.463590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.230 qpair failed and we were unable to recover it. 00:25:49.230 [2024-11-20 07:27:52.463733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.230 [2024-11-20 07:27:52.463771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.230 qpair failed and we were unable to recover it. 00:25:49.230 [2024-11-20 07:27:52.464005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.230 [2024-11-20 07:27:52.464043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.230 qpair failed and we were unable to recover it. 00:25:49.230 [2024-11-20 07:27:52.464202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.230 [2024-11-20 07:27:52.464239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.230 qpair failed and we were unable to recover it. 
00:25:49.230 [2024-11-20 07:27:52.464405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.230 [2024-11-20 07:27:52.464445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.230 qpair failed and we were unable to recover it. 00:25:49.230 [2024-11-20 07:27:52.464547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.231 [2024-11-20 07:27:52.464586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.231 qpair failed and we were unable to recover it. 00:25:49.231 [2024-11-20 07:27:52.464711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.231 [2024-11-20 07:27:52.464738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.231 qpair failed and we were unable to recover it. 00:25:49.231 [2024-11-20 07:27:52.464890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.231 [2024-11-20 07:27:52.464941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.231 qpair failed and we were unable to recover it. 00:25:49.231 [2024-11-20 07:27:52.465041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.231 [2024-11-20 07:27:52.465067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.231 qpair failed and we were unable to recover it. 00:25:49.231 [2024-11-20 07:27:52.465254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.231 [2024-11-20 07:27:52.465298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.231 qpair failed and we were unable to recover it. 00:25:49.231 [2024-11-20 07:27:52.465418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.231 [2024-11-20 07:27:52.465445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.231 qpair failed and we were unable to recover it. 00:25:49.231 [2024-11-20 07:27:52.465560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.231 [2024-11-20 07:27:52.465586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.231 qpair failed and we were unable to recover it. 00:25:49.231 [2024-11-20 07:27:52.465697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.231 [2024-11-20 07:27:52.465724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.231 qpair failed and we were unable to recover it. 00:25:49.231 [2024-11-20 07:27:52.465878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.231 [2024-11-20 07:27:52.465916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.231 qpair failed and we were unable to recover it. 
00:25:49.231 [2024-11-20 07:27:52.466067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.231 [2024-11-20 07:27:52.466122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.231 qpair failed and we were unable to recover it. 00:25:49.231 [2024-11-20 07:27:52.466270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.231 [2024-11-20 07:27:52.466299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.231 qpair failed and we were unable to recover it. 00:25:49.231 [2024-11-20 07:27:52.466422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.231 [2024-11-20 07:27:52.466449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.231 qpair failed and we were unable to recover it. 00:25:49.231 [2024-11-20 07:27:52.466560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.231 [2024-11-20 07:27:52.466588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.231 qpair failed and we were unable to recover it. 00:25:49.231 [2024-11-20 07:27:52.466728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.231 [2024-11-20 07:27:52.466772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.231 qpair failed and we were unable to recover it. 00:25:49.231 [2024-11-20 07:27:52.466870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.231 [2024-11-20 07:27:52.466899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.231 qpair failed and we were unable to recover it. 00:25:49.231 [2024-11-20 07:27:52.467081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.231 [2024-11-20 07:27:52.467111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.231 qpair failed and we were unable to recover it. 00:25:49.231 [2024-11-20 07:27:52.467234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.231 [2024-11-20 07:27:52.467263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.231 qpair failed and we were unable to recover it. 00:25:49.231 [2024-11-20 07:27:52.467412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.231 [2024-11-20 07:27:52.467439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.231 qpair failed and we were unable to recover it. 00:25:49.231 [2024-11-20 07:27:52.467530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.231 [2024-11-20 07:27:52.467557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.231 qpair failed and we were unable to recover it. 
00:25:49.231 [2024-11-20 07:27:52.467690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.231 [2024-11-20 07:27:52.467719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.231 qpair failed and we were unable to recover it. 00:25:49.231 [2024-11-20 07:27:52.467818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.231 [2024-11-20 07:27:52.467847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.231 qpair failed and we were unable to recover it. 00:25:49.231 [2024-11-20 07:27:52.467951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.231 [2024-11-20 07:27:52.467980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.231 qpair failed and we were unable to recover it. 00:25:49.231 [2024-11-20 07:27:52.468160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.231 [2024-11-20 07:27:52.468207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.231 qpair failed and we were unable to recover it. 00:25:49.231 [2024-11-20 07:27:52.468342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.231 [2024-11-20 07:27:52.468391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.231 qpair failed and we were unable to recover it. 00:25:49.231 [2024-11-20 07:27:52.468502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.231 [2024-11-20 07:27:52.468528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.231 qpair failed and we were unable to recover it. 00:25:49.231 [2024-11-20 07:27:52.468627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.231 [2024-11-20 07:27:52.468667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.231 qpair failed and we were unable to recover it. 00:25:49.231 [2024-11-20 07:27:52.468876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.231 [2024-11-20 07:27:52.468904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.231 qpair failed and we were unable to recover it. 00:25:49.231 [2024-11-20 07:27:52.469004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.231 [2024-11-20 07:27:52.469033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.231 qpair failed and we were unable to recover it. 00:25:49.231 [2024-11-20 07:27:52.469153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.231 [2024-11-20 07:27:52.469182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.231 qpair failed and we were unable to recover it. 
00:25:49.231 [2024-11-20 07:27:52.469346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.231 [2024-11-20 07:27:52.469386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.232 qpair failed and we were unable to recover it. 00:25:49.232 [2024-11-20 07:27:52.469492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.232 [2024-11-20 07:27:52.469531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.232 qpair failed and we were unable to recover it. 00:25:49.232 [2024-11-20 07:27:52.469653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.232 [2024-11-20 07:27:52.469705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.232 qpair failed and we were unable to recover it. 00:25:49.232 [2024-11-20 07:27:52.469821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.232 [2024-11-20 07:27:52.469848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.232 qpair failed and we were unable to recover it. 00:25:49.232 [2024-11-20 07:27:52.469965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.232 [2024-11-20 07:27:52.469991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.232 qpair failed and we were unable to recover it. 00:25:49.232 [2024-11-20 07:27:52.470083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.232 [2024-11-20 07:27:52.470110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.232 qpair failed and we were unable to recover it. 00:25:49.232 [2024-11-20 07:27:52.470197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.232 [2024-11-20 07:27:52.470223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.232 qpair failed and we were unable to recover it. 00:25:49.232 [2024-11-20 07:27:52.470381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.232 [2024-11-20 07:27:52.470421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.232 qpair failed and we were unable to recover it. 00:25:49.232 [2024-11-20 07:27:52.470509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.232 [2024-11-20 07:27:52.470537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.232 qpair failed and we were unable to recover it. 00:25:49.232 [2024-11-20 07:27:52.470676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.232 [2024-11-20 07:27:52.470704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.232 qpair failed and we were unable to recover it. 
00:25:49.232 [2024-11-20 07:27:52.470795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.232 [2024-11-20 07:27:52.470822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.232 qpair failed and we were unable to recover it. 00:25:49.232 [2024-11-20 07:27:52.470958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.232 [2024-11-20 07:27:52.470989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.232 qpair failed and we were unable to recover it. 00:25:49.232 [2024-11-20 07:27:52.471119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.232 [2024-11-20 07:27:52.471145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.232 qpair failed and we were unable to recover it. 00:25:49.232 [2024-11-20 07:27:52.471226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.232 [2024-11-20 07:27:52.471252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.232 qpair failed and we were unable to recover it. 00:25:49.232 [2024-11-20 07:27:52.471336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.232 [2024-11-20 07:27:52.471363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.232 qpair failed and we were unable to recover it. 00:25:49.232 [2024-11-20 07:27:52.471475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.232 [2024-11-20 07:27:52.471520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.232 qpair failed and we were unable to recover it. 00:25:49.232 [2024-11-20 07:27:52.471598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.232 [2024-11-20 07:27:52.471624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.232 qpair failed and we were unable to recover it. 00:25:49.232 [2024-11-20 07:27:52.471764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.232 [2024-11-20 07:27:52.471808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.232 qpair failed and we were unable to recover it. 00:25:49.232 [2024-11-20 07:27:52.471894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.232 [2024-11-20 07:27:52.471923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.232 qpair failed and we were unable to recover it. 00:25:49.232 [2024-11-20 07:27:52.472033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.232 [2024-11-20 07:27:52.472059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.232 qpair failed and we were unable to recover it. 
00:25:49.232 [2024-11-20 07:27:52.472215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.232 [2024-11-20 07:27:52.472256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.232 qpair failed and we were unable to recover it. 00:25:49.232 [2024-11-20 07:27:52.472357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.232 [2024-11-20 07:27:52.472387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.232 qpair failed and we were unable to recover it. 00:25:49.232 [2024-11-20 07:27:52.472470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.232 [2024-11-20 07:27:52.472498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.232 qpair failed and we were unable to recover it. 00:25:49.232 [2024-11-20 07:27:52.472583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.232 [2024-11-20 07:27:52.472610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.232 qpair failed and we were unable to recover it. 00:25:49.232 [2024-11-20 07:27:52.472698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.232 [2024-11-20 07:27:52.472726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.232 qpair failed and we were unable to recover it. 00:25:49.232 [2024-11-20 07:27:52.472844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.232 [2024-11-20 07:27:52.472871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.232 qpair failed and we were unable to recover it. 00:25:49.232 [2024-11-20 07:27:52.473056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.232 [2024-11-20 07:27:52.473107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.232 qpair failed and we were unable to recover it. 00:25:49.232 [2024-11-20 07:27:52.473195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.232 [2024-11-20 07:27:52.473222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.232 qpair failed and we were unable to recover it. 00:25:49.232 [2024-11-20 07:27:52.473324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.232 [2024-11-20 07:27:52.473381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.232 qpair failed and we were unable to recover it. 00:25:49.232 [2024-11-20 07:27:52.473494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.232 [2024-11-20 07:27:52.473526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.232 qpair failed and we were unable to recover it. 
00:25:49.232 [2024-11-20 07:27:52.473648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.232 [2024-11-20 07:27:52.473677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.232 qpair failed and we were unable to recover it. 00:25:49.232 [2024-11-20 07:27:52.473804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.232 [2024-11-20 07:27:52.473833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.232 qpair failed and we were unable to recover it. 00:25:49.232 [2024-11-20 07:27:52.473969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.232 [2024-11-20 07:27:52.474011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.232 qpair failed and we were unable to recover it. 00:25:49.232 [2024-11-20 07:27:52.474144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.232 [2024-11-20 07:27:52.474180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.232 qpair failed and we were unable to recover it. 00:25:49.232 [2024-11-20 07:27:52.474347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.232 [2024-11-20 07:27:52.474374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.232 qpair failed and we were unable to recover it. 00:25:49.232 [2024-11-20 07:27:52.474467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.232 [2024-11-20 07:27:52.474493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.232 qpair failed and we were unable to recover it. 00:25:49.232 [2024-11-20 07:27:52.474655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.232 [2024-11-20 07:27:52.474697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.232 qpair failed and we were unable to recover it. 00:25:49.232 [2024-11-20 07:27:52.474905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.232 [2024-11-20 07:27:52.474943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.232 qpair failed and we were unable to recover it. 00:25:49.232 [2024-11-20 07:27:52.475120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.233 [2024-11-20 07:27:52.475158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.233 qpair failed and we were unable to recover it. 00:25:49.233 [2024-11-20 07:27:52.475395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.233 [2024-11-20 07:27:52.475421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.233 qpair failed and we were unable to recover it. 
00:25:49.233 [2024-11-20 07:27:52.475550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.233 [2024-11-20 07:27:52.475600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.233 qpair failed and we were unable to recover it. 00:25:49.233 [2024-11-20 07:27:52.475728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.233 [2024-11-20 07:27:52.475769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.233 qpair failed and we were unable to recover it. 00:25:49.233 [2024-11-20 07:27:52.475897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.233 [2024-11-20 07:27:52.475947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.233 qpair failed and we were unable to recover it. 00:25:49.233 [2024-11-20 07:27:52.476110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.233 [2024-11-20 07:27:52.476150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.233 qpair failed and we were unable to recover it. 00:25:49.233 [2024-11-20 07:27:52.476310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.233 [2024-11-20 07:27:52.476365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.233 qpair failed and we were unable to recover it. 00:25:49.233 [2024-11-20 07:27:52.476478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.233 [2024-11-20 07:27:52.476504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.233 qpair failed and we were unable to recover it. 00:25:49.233 [2024-11-20 07:27:52.476615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.233 [2024-11-20 07:27:52.476641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.233 qpair failed and we were unable to recover it. 00:25:49.233 [2024-11-20 07:27:52.476847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.233 [2024-11-20 07:27:52.476886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.233 qpair failed and we were unable to recover it. 00:25:49.233 [2024-11-20 07:27:52.477041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.233 [2024-11-20 07:27:52.477084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.233 qpair failed and we were unable to recover it. 00:25:49.233 [2024-11-20 07:27:52.477246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.233 [2024-11-20 07:27:52.477272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.233 qpair failed and we were unable to recover it. 
00:25:49.233 [2024-11-20 07:27:52.477395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.233 [2024-11-20 07:27:52.477422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.233 qpair failed and we were unable to recover it. 00:25:49.233 [2024-11-20 07:27:52.477507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.233 [2024-11-20 07:27:52.477533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.233 qpair failed and we were unable to recover it. 00:25:49.233 [2024-11-20 07:27:52.477725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.233 [2024-11-20 07:27:52.477751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.233 qpair failed and we were unable to recover it. 00:25:49.233 [2024-11-20 07:27:52.477864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.233 [2024-11-20 07:27:52.477917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.233 qpair failed and we were unable to recover it. 00:25:49.233 [2024-11-20 07:27:52.478040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.233 [2024-11-20 07:27:52.478088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.233 qpair failed and we were unable to recover it. 00:25:49.233 [2024-11-20 07:27:52.478243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.233 [2024-11-20 07:27:52.478269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.233 qpair failed and we were unable to recover it. 00:25:49.233 [2024-11-20 07:27:52.478370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.233 [2024-11-20 07:27:52.478396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.233 qpair failed and we were unable to recover it. 00:25:49.233 [2024-11-20 07:27:52.478487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.233 [2024-11-20 07:27:52.478514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.233 qpair failed and we were unable to recover it. 00:25:49.233 [2024-11-20 07:27:52.478599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.233 [2024-11-20 07:27:52.478641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.233 qpair failed and we were unable to recover it. 00:25:49.233 [2024-11-20 07:27:52.478794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.233 [2024-11-20 07:27:52.478831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.233 qpair failed and we were unable to recover it. 
00:25:49.233 [2024-11-20 07:27:52.479057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.233 [2024-11-20 07:27:52.479094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.233 qpair failed and we were unable to recover it. 00:25:49.233 [2024-11-20 07:27:52.479206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.233 [2024-11-20 07:27:52.479232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.233 qpair failed and we were unable to recover it. 00:25:49.233 [2024-11-20 07:27:52.479374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.233 [2024-11-20 07:27:52.479400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.233 qpair failed and we were unable to recover it. 00:25:49.233 [2024-11-20 07:27:52.479495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.233 [2024-11-20 07:27:52.479522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.233 qpair failed and we were unable to recover it. 00:25:49.233 [2024-11-20 07:27:52.479605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.233 [2024-11-20 07:27:52.479631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.233 qpair failed and we were unable to recover it. 00:25:49.233 [2024-11-20 07:27:52.479744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.233 [2024-11-20 07:27:52.479771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.233 qpair failed and we were unable to recover it. 00:25:49.233 [2024-11-20 07:27:52.479914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.233 [2024-11-20 07:27:52.479953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.233 qpair failed and we were unable to recover it. 00:25:49.233 [2024-11-20 07:27:52.480081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.233 [2024-11-20 07:27:52.480134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.233 qpair failed and we were unable to recover it. 00:25:49.233 [2024-11-20 07:27:52.480334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.233 [2024-11-20 07:27:52.480361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.233 qpair failed and we were unable to recover it. 00:25:49.233 [2024-11-20 07:27:52.480554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.233 [2024-11-20 07:27:52.480580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.233 qpair failed and we were unable to recover it. 
00:25:49.233 [2024-11-20 07:27:52.480756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.233 [2024-11-20 07:27:52.480782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.233 qpair failed and we were unable to recover it. 00:25:49.233 [2024-11-20 07:27:52.480899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.233 [2024-11-20 07:27:52.480925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.233 qpair failed and we were unable to recover it. 00:25:49.233 [2024-11-20 07:27:52.481037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.233 [2024-11-20 07:27:52.481064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.233 qpair failed and we were unable to recover it. 00:25:49.233 [2024-11-20 07:27:52.481214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.233 [2024-11-20 07:27:52.481258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.233 qpair failed and we were unable to recover it. 00:25:49.233 [2024-11-20 07:27:52.481445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.233 [2024-11-20 07:27:52.481472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.233 qpair failed and we were unable to recover it. 00:25:49.233 [2024-11-20 07:27:52.481560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.233 [2024-11-20 07:27:52.481604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.233 qpair failed and we were unable to recover it. 00:25:49.234 [2024-11-20 07:27:52.481743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.234 [2024-11-20 07:27:52.481769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.234 qpair failed and we were unable to recover it. 00:25:49.234 [2024-11-20 07:27:52.481901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.234 [2024-11-20 07:27:52.481942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.234 qpair failed and we were unable to recover it. 00:25:49.234 [2024-11-20 07:27:52.482062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.234 [2024-11-20 07:27:52.482118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.234 qpair failed and we were unable to recover it. 00:25:49.234 [2024-11-20 07:27:52.482274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.234 [2024-11-20 07:27:52.482300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.234 qpair failed and we were unable to recover it. 
00:25:49.234 [2024-11-20 07:27:52.482423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.234 [2024-11-20 07:27:52.482450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.234 qpair failed and we were unable to recover it. 00:25:49.234 [2024-11-20 07:27:52.482551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.234 [2024-11-20 07:27:52.482590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.234 qpair failed and we were unable to recover it. 00:25:49.234 [2024-11-20 07:27:52.482735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.234 [2024-11-20 07:27:52.482761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.234 qpair failed and we were unable to recover it. 00:25:49.234 [2024-11-20 07:27:52.482852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.234 [2024-11-20 07:27:52.482878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.234 qpair failed and we were unable to recover it. 00:25:49.234 [2024-11-20 07:27:52.483015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.234 [2024-11-20 07:27:52.483064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.234 qpair failed and we were unable to recover it. 00:25:49.234 [2024-11-20 07:27:52.483213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.234 [2024-11-20 07:27:52.483241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.234 qpair failed and we were unable to recover it. 00:25:49.234 [2024-11-20 07:27:52.483370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.234 [2024-11-20 07:27:52.483397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.234 qpair failed and we were unable to recover it. 00:25:49.234 [2024-11-20 07:27:52.483516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.234 [2024-11-20 07:27:52.483541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.234 qpair failed and we were unable to recover it. 00:25:49.234 [2024-11-20 07:27:52.483631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.234 [2024-11-20 07:27:52.483657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.234 qpair failed and we were unable to recover it. 00:25:49.234 [2024-11-20 07:27:52.483770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.234 [2024-11-20 07:27:52.483799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.234 qpair failed and we were unable to recover it. 
00:25:49.234 [2024-11-20 07:27:52.483909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.234 [2024-11-20 07:27:52.483938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.234 qpair failed and we were unable to recover it. 00:25:49.234 [2024-11-20 07:27:52.484063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.234 [2024-11-20 07:27:52.484105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.234 qpair failed and we were unable to recover it. 00:25:49.234 [2024-11-20 07:27:52.484229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.234 [2024-11-20 07:27:52.484258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.234 qpair failed and we were unable to recover it. 00:25:49.234 [2024-11-20 07:27:52.484433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.234 [2024-11-20 07:27:52.484461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.234 qpair failed and we were unable to recover it. 00:25:49.234 [2024-11-20 07:27:52.484576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.234 [2024-11-20 07:27:52.484602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.234 qpair failed and we were unable to recover it. 00:25:49.234 [2024-11-20 07:27:52.484697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.234 [2024-11-20 07:27:52.484724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.234 qpair failed and we were unable to recover it. 00:25:49.234 [2024-11-20 07:27:52.484849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.234 [2024-11-20 07:27:52.484878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.234 qpair failed and we were unable to recover it. 00:25:49.234 [2024-11-20 07:27:52.485072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.234 [2024-11-20 07:27:52.485101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.234 qpair failed and we were unable to recover it. 00:25:49.234 [2024-11-20 07:27:52.485220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.234 [2024-11-20 07:27:52.485250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.234 qpair failed and we were unable to recover it. 00:25:49.234 [2024-11-20 07:27:52.485395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.234 [2024-11-20 07:27:52.485423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.234 qpair failed and we were unable to recover it. 
00:25:49.234 [2024-11-20 07:27:52.485511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.234 [2024-11-20 07:27:52.485542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.234 qpair failed and we were unable to recover it. 00:25:49.234 [2024-11-20 07:27:52.485654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.234 [2024-11-20 07:27:52.485679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.234 qpair failed and we were unable to recover it. 00:25:49.234 [2024-11-20 07:27:52.485866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.234 [2024-11-20 07:27:52.485915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.234 qpair failed and we were unable to recover it. 00:25:49.234 [2024-11-20 07:27:52.486099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.234 [2024-11-20 07:27:52.486128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.234 qpair failed and we were unable to recover it. 00:25:49.234 [2024-11-20 07:27:52.486224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.234 [2024-11-20 07:27:52.486252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.234 qpair failed and we were unable to recover it. 00:25:49.234 [2024-11-20 07:27:52.486366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.234 [2024-11-20 07:27:52.486392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.234 qpair failed and we were unable to recover it. 00:25:49.234 [2024-11-20 07:27:52.486506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.234 [2024-11-20 07:27:52.486532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.234 qpair failed and we were unable to recover it. 00:25:49.234 [2024-11-20 07:27:52.486645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.234 [2024-11-20 07:27:52.486670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.234 qpair failed and we were unable to recover it. 00:25:49.234 [2024-11-20 07:27:52.486801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.234 [2024-11-20 07:27:52.486830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.234 qpair failed and we were unable to recover it. 00:25:49.234 [2024-11-20 07:27:52.486934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.234 [2024-11-20 07:27:52.486960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.234 qpair failed and we were unable to recover it. 
00:25:49.234 [2024-11-20 07:27:52.487098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.234 [2024-11-20 07:27:52.487127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.234 qpair failed and we were unable to recover it. 00:25:49.234 [2024-11-20 07:27:52.487241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.234 [2024-11-20 07:27:52.487267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.234 qpair failed and we were unable to recover it. 00:25:49.234 [2024-11-20 07:27:52.487419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.234 [2024-11-20 07:27:52.487445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.234 qpair failed and we were unable to recover it. 00:25:49.234 [2024-11-20 07:27:52.487527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.235 [2024-11-20 07:27:52.487552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.235 qpair failed and we were unable to recover it. 00:25:49.235 [2024-11-20 07:27:52.487706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.235 [2024-11-20 07:27:52.487733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.235 qpair failed and we were unable to recover it. 00:25:49.235 [2024-11-20 07:27:52.487876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.235 [2024-11-20 07:27:52.487905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.235 qpair failed and we were unable to recover it. 00:25:49.235 [2024-11-20 07:27:52.488026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.235 [2024-11-20 07:27:52.488054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.235 qpair failed and we were unable to recover it. 00:25:49.235 [2024-11-20 07:27:52.488178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.235 [2024-11-20 07:27:52.488206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.235 qpair failed and we were unable to recover it. 00:25:49.235 [2024-11-20 07:27:52.488359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.235 [2024-11-20 07:27:52.488386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.235 qpair failed and we were unable to recover it. 00:25:49.235 [2024-11-20 07:27:52.488476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.235 [2024-11-20 07:27:52.488501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.235 qpair failed and we were unable to recover it. 
00:25:49.235 [2024-11-20 07:27:52.488608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.235 [2024-11-20 07:27:52.488637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.235 qpair failed and we were unable to recover it. 00:25:49.235 [2024-11-20 07:27:52.488796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.235 [2024-11-20 07:27:52.488820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.235 qpair failed and we were unable to recover it. 00:25:49.235 [2024-11-20 07:27:52.488940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.235 [2024-11-20 07:27:52.488983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.235 qpair failed and we were unable to recover it. 00:25:49.235 [2024-11-20 07:27:52.489084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.235 [2024-11-20 07:27:52.489112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.235 qpair failed and we were unable to recover it. 00:25:49.235 [2024-11-20 07:27:52.489258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.235 [2024-11-20 07:27:52.489286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.235 qpair failed and we were unable to recover it. 00:25:49.235 [2024-11-20 07:27:52.489398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.235 [2024-11-20 07:27:52.489427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.235 qpair failed and we were unable to recover it. 00:25:49.235 [2024-11-20 07:27:52.489548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.235 [2024-11-20 07:27:52.489577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.235 qpair failed and we were unable to recover it. 00:25:49.235 [2024-11-20 07:27:52.489666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.235 [2024-11-20 07:27:52.489703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.235 qpair failed and we were unable to recover it. 00:25:49.235 [2024-11-20 07:27:52.489795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.235 [2024-11-20 07:27:52.489823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.235 qpair failed and we were unable to recover it. 00:25:49.235 [2024-11-20 07:27:52.489925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.235 [2024-11-20 07:27:52.489954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.235 qpair failed and we were unable to recover it. 
00:25:49.235 [2024-11-20 07:27:52.490071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.235 [2024-11-20 07:27:52.490099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.235 qpair failed and we were unable to recover it. 00:25:49.235 [2024-11-20 07:27:52.490216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.235 [2024-11-20 07:27:52.490261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.235 qpair failed and we were unable to recover it. 00:25:49.235 [2024-11-20 07:27:52.490436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.235 [2024-11-20 07:27:52.490480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.235 qpair failed and we were unable to recover it. 00:25:49.235 [2024-11-20 07:27:52.490614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.235 [2024-11-20 07:27:52.490645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.235 qpair failed and we were unable to recover it. 00:25:49.235 [2024-11-20 07:27:52.490766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.235 [2024-11-20 07:27:52.490795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.235 qpair failed and we were unable to recover it. 00:25:49.235 [2024-11-20 07:27:52.490889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.235 [2024-11-20 07:27:52.490919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.235 qpair failed and we were unable to recover it. 00:25:49.235 [2024-11-20 07:27:52.491058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.235 [2024-11-20 07:27:52.491087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.235 qpair failed and we were unable to recover it. 00:25:49.235 [2024-11-20 07:27:52.491204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.235 [2024-11-20 07:27:52.491233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.235 qpair failed and we were unable to recover it. 00:25:49.235 [2024-11-20 07:27:52.491359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.235 [2024-11-20 07:27:52.491404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.235 qpair failed and we were unable to recover it. 00:25:49.235 [2024-11-20 07:27:52.491539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.235 [2024-11-20 07:27:52.491587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.235 qpair failed and we were unable to recover it. 
00:25:49.235 [2024-11-20 07:27:52.491703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.235 [2024-11-20 07:27:52.491734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.235 qpair failed and we were unable to recover it. 00:25:49.235 [2024-11-20 07:27:52.491919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.235 [2024-11-20 07:27:52.491959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.235 qpair failed and we were unable to recover it. 00:25:49.235 [2024-11-20 07:27:52.492083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.235 [2024-11-20 07:27:52.492123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.235 qpair failed and we were unable to recover it. 00:25:49.235 [2024-11-20 07:27:52.492271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.235 [2024-11-20 07:27:52.492301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.235 qpair failed and we were unable to recover it. 00:25:49.235 [2024-11-20 07:27:52.492435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.235 [2024-11-20 07:27:52.492465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.235 qpair failed and we were unable to recover it. 00:25:49.235 [2024-11-20 07:27:52.492557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.235 [2024-11-20 07:27:52.492586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.235 qpair failed and we were unable to recover it. 00:25:49.235 [2024-11-20 07:27:52.492743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.235 [2024-11-20 07:27:52.492798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.236 qpair failed and we were unable to recover it. 00:25:49.236 [2024-11-20 07:27:52.492958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.236 [2024-11-20 07:27:52.492999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.236 qpair failed and we were unable to recover it. 00:25:49.236 [2024-11-20 07:27:52.493169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.236 [2024-11-20 07:27:52.493208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.236 qpair failed and we were unable to recover it. 00:25:49.236 [2024-11-20 07:27:52.493366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.236 [2024-11-20 07:27:52.493396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.236 qpair failed and we were unable to recover it. 
00:25:49.236 [2024-11-20 07:27:52.493542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.236 [2024-11-20 07:27:52.493571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420
00:25:49.236 qpair failed and we were unable to recover it.
00:25:49.236 [... the same three-line sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 07:27:52.493542 through 07:27:52.537619, first for tqpair=0x7fce1c000b90 and then for tqpair=0x7fce10000b90; every attempted qpair connection fails and cannot be recovered ...]
00:25:49.241 [2024-11-20 07:27:52.537842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.241 [2024-11-20 07:27:52.537889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.241 qpair failed and we were unable to recover it. 00:25:49.241 [2024-11-20 07:27:52.538109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.241 [2024-11-20 07:27:52.538156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.241 qpair failed and we were unable to recover it. 00:25:49.241 [2024-11-20 07:27:52.538370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.241 [2024-11-20 07:27:52.538417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.241 qpair failed and we were unable to recover it. 00:25:49.241 [2024-11-20 07:27:52.538570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.241 [2024-11-20 07:27:52.538617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.241 qpair failed and we were unable to recover it. 00:25:49.241 [2024-11-20 07:27:52.538789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.241 [2024-11-20 07:27:52.538836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.241 qpair failed and we were unable to recover it. 00:25:49.241 [2024-11-20 07:27:52.539006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.241 [2024-11-20 07:27:52.539053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.241 qpair failed and we were unable to recover it. 00:25:49.241 [2024-11-20 07:27:52.539190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.241 [2024-11-20 07:27:52.539237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.241 qpair failed and we were unable to recover it. 00:25:49.241 [2024-11-20 07:27:52.539415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.241 [2024-11-20 07:27:52.539467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.241 qpair failed and we were unable to recover it. 00:25:49.241 [2024-11-20 07:27:52.539668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.241 [2024-11-20 07:27:52.539718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.241 qpair failed and we were unable to recover it. 00:25:49.241 [2024-11-20 07:27:52.539896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.242 [2024-11-20 07:27:52.539946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.242 qpair failed and we were unable to recover it. 
00:25:49.242 [2024-11-20 07:27:52.540173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.242 [2024-11-20 07:27:52.540223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.242 qpair failed and we were unable to recover it. 00:25:49.242 [2024-11-20 07:27:52.540422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.242 [2024-11-20 07:27:52.540473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.242 qpair failed and we were unable to recover it. 00:25:49.242 [2024-11-20 07:27:52.540676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.242 [2024-11-20 07:27:52.540723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.242 qpair failed and we were unable to recover it. 00:25:49.242 [2024-11-20 07:27:52.540870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.242 [2024-11-20 07:27:52.540919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.242 qpair failed and we were unable to recover it. 00:25:49.242 [2024-11-20 07:27:52.541120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.242 [2024-11-20 07:27:52.541167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.242 qpair failed and we were unable to recover it. 00:25:49.242 [2024-11-20 07:27:52.541402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.242 [2024-11-20 07:27:52.541453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.242 qpair failed and we were unable to recover it. 00:25:49.242 [2024-11-20 07:27:52.541609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.242 [2024-11-20 07:27:52.541662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.242 qpair failed and we were unable to recover it. 00:25:49.242 [2024-11-20 07:27:52.541857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.242 [2024-11-20 07:27:52.541907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.242 qpair failed and we were unable to recover it. 00:25:49.242 [2024-11-20 07:27:52.542097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.242 [2024-11-20 07:27:52.542149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.242 qpair failed and we were unable to recover it. 00:25:49.242 [2024-11-20 07:27:52.542375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.242 [2024-11-20 07:27:52.542427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.242 qpair failed and we were unable to recover it. 
00:25:49.242 [2024-11-20 07:27:52.542651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.242 [2024-11-20 07:27:52.542701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.242 qpair failed and we were unable to recover it. 00:25:49.242 [2024-11-20 07:27:52.542907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.242 [2024-11-20 07:27:52.542957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.242 qpair failed and we were unable to recover it. 00:25:49.242 [2024-11-20 07:27:52.543119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.242 [2024-11-20 07:27:52.543169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.242 qpair failed and we were unable to recover it. 00:25:49.242 [2024-11-20 07:27:52.543340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.242 [2024-11-20 07:27:52.543391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.242 qpair failed and we were unable to recover it. 00:25:49.242 [2024-11-20 07:27:52.543572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.242 [2024-11-20 07:27:52.543622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.242 qpair failed and we were unable to recover it. 00:25:49.242 [2024-11-20 07:27:52.543823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.242 [2024-11-20 07:27:52.543872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.242 qpair failed and we were unable to recover it. 00:25:49.242 [2024-11-20 07:27:52.544023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.242 [2024-11-20 07:27:52.544101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.242 qpair failed and we were unable to recover it. 00:25:49.242 [2024-11-20 07:27:52.544374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.242 [2024-11-20 07:27:52.544437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.242 qpair failed and we were unable to recover it. 00:25:49.242 [2024-11-20 07:27:52.544697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.242 [2024-11-20 07:27:52.544779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.242 qpair failed and we were unable to recover it. 00:25:49.242 [2024-11-20 07:27:52.544996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.242 [2024-11-20 07:27:52.545046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.242 qpair failed and we were unable to recover it. 
00:25:49.242 [2024-11-20 07:27:52.545212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.242 [2024-11-20 07:27:52.545263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.242 qpair failed and we were unable to recover it. 00:25:49.242 [2024-11-20 07:27:52.545448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.242 [2024-11-20 07:27:52.545498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.242 qpair failed and we were unable to recover it. 00:25:49.242 [2024-11-20 07:27:52.545721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.242 [2024-11-20 07:27:52.545772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.242 qpair failed and we were unable to recover it. 00:25:49.242 [2024-11-20 07:27:52.545949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.242 [2024-11-20 07:27:52.545999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.242 qpair failed and we were unable to recover it. 00:25:49.242 [2024-11-20 07:27:52.546148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.242 [2024-11-20 07:27:52.546206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.242 qpair failed and we were unable to recover it. 00:25:49.242 [2024-11-20 07:27:52.546405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.242 [2024-11-20 07:27:52.546457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.242 qpair failed and we were unable to recover it. 00:25:49.242 [2024-11-20 07:27:52.546659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.242 [2024-11-20 07:27:52.546708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.242 qpair failed and we were unable to recover it. 00:25:49.242 [2024-11-20 07:27:52.546877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.242 [2024-11-20 07:27:52.546927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.242 qpair failed and we were unable to recover it. 00:25:49.242 [2024-11-20 07:27:52.547115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.242 [2024-11-20 07:27:52.547166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.242 qpair failed and we were unable to recover it. 00:25:49.242 [2024-11-20 07:27:52.547387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.242 [2024-11-20 07:27:52.547439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.242 qpair failed and we were unable to recover it. 
00:25:49.242 [2024-11-20 07:27:52.547627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.242 [2024-11-20 07:27:52.547677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.242 qpair failed and we were unable to recover it. 00:25:49.242 [2024-11-20 07:27:52.547902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.242 [2024-11-20 07:27:52.547952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.242 qpair failed and we were unable to recover it. 00:25:49.242 [2024-11-20 07:27:52.548146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.242 [2024-11-20 07:27:52.548196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.242 qpair failed and we were unable to recover it. 00:25:49.242 [2024-11-20 07:27:52.548419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.242 [2024-11-20 07:27:52.548470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.242 qpair failed and we were unable to recover it. 00:25:49.242 [2024-11-20 07:27:52.548665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.242 [2024-11-20 07:27:52.548715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.242 qpair failed and we were unable to recover it. 00:25:49.242 [2024-11-20 07:27:52.548942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.242 [2024-11-20 07:27:52.548992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.242 qpair failed and we were unable to recover it. 00:25:49.242 [2024-11-20 07:27:52.549186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.242 [2024-11-20 07:27:52.549236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.242 qpair failed and we were unable to recover it. 00:25:49.242 [2024-11-20 07:27:52.549427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.243 [2024-11-20 07:27:52.549480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.243 qpair failed and we were unable to recover it. 00:25:49.243 [2024-11-20 07:27:52.549695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.243 [2024-11-20 07:27:52.549745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.243 qpair failed and we were unable to recover it. 00:25:49.243 [2024-11-20 07:27:52.549947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.243 [2024-11-20 07:27:52.549997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.243 qpair failed and we were unable to recover it. 
00:25:49.243 [2024-11-20 07:27:52.550191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.243 [2024-11-20 07:27:52.550240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.243 qpair failed and we were unable to recover it. 00:25:49.243 [2024-11-20 07:27:52.550449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.243 [2024-11-20 07:27:52.550501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.243 qpair failed and we were unable to recover it. 00:25:49.243 [2024-11-20 07:27:52.550691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.243 [2024-11-20 07:27:52.550741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.243 qpair failed and we were unable to recover it. 00:25:49.243 [2024-11-20 07:27:52.550917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.243 [2024-11-20 07:27:52.550967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.243 qpair failed and we were unable to recover it. 00:25:49.243 [2024-11-20 07:27:52.551193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.243 [2024-11-20 07:27:52.551243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.243 qpair failed and we were unable to recover it. 00:25:49.243 [2024-11-20 07:27:52.551479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.243 [2024-11-20 07:27:52.551530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.243 qpair failed and we were unable to recover it. 00:25:49.243 [2024-11-20 07:27:52.551689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.243 [2024-11-20 07:27:52.551743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.243 qpair failed and we were unable to recover it. 00:25:49.243 [2024-11-20 07:27:52.551971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.243 [2024-11-20 07:27:52.552021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.243 qpair failed and we were unable to recover it. 00:25:49.243 [2024-11-20 07:27:52.552181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.243 [2024-11-20 07:27:52.552244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.243 qpair failed and we were unable to recover it. 00:25:49.243 [2024-11-20 07:27:52.552482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.243 [2024-11-20 07:27:52.552532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.243 qpair failed and we were unable to recover it. 
00:25:49.243 [2024-11-20 07:27:52.552759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.243 [2024-11-20 07:27:52.552809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.243 qpair failed and we were unable to recover it. 00:25:49.243 [2024-11-20 07:27:52.553001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.243 [2024-11-20 07:27:52.553052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.243 qpair failed and we were unable to recover it. 00:25:49.243 [2024-11-20 07:27:52.553275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.243 [2024-11-20 07:27:52.553368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.243 qpair failed and we were unable to recover it. 00:25:49.243 [2024-11-20 07:27:52.553523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.243 [2024-11-20 07:27:52.553575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.243 qpair failed and we were unable to recover it. 00:25:49.243 [2024-11-20 07:27:52.553739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.243 [2024-11-20 07:27:52.553789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.243 qpair failed and we were unable to recover it. 00:25:49.243 [2024-11-20 07:27:52.553969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.243 [2024-11-20 07:27:52.554019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.243 qpair failed and we were unable to recover it. 00:25:49.243 [2024-11-20 07:27:52.554209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.243 [2024-11-20 07:27:52.554258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.243 qpair failed and we were unable to recover it. 00:25:49.243 [2024-11-20 07:27:52.554500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.243 [2024-11-20 07:27:52.554554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.243 qpair failed and we were unable to recover it. 00:25:49.243 [2024-11-20 07:27:52.554750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.243 [2024-11-20 07:27:52.554803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.243 qpair failed and we were unable to recover it. 00:25:49.243 [2024-11-20 07:27:52.554995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.243 [2024-11-20 07:27:52.555048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.243 qpair failed and we were unable to recover it. 
00:25:49.243 [2024-11-20 07:27:52.555286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.243 [2024-11-20 07:27:52.555354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.243 qpair failed and we were unable to recover it. 00:25:49.243 [2024-11-20 07:27:52.555607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.243 [2024-11-20 07:27:52.555660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.243 qpair failed and we were unable to recover it. 00:25:49.243 [2024-11-20 07:27:52.555895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.243 [2024-11-20 07:27:52.555949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.243 qpair failed and we were unable to recover it. 00:25:49.243 [2024-11-20 07:27:52.556153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.243 [2024-11-20 07:27:52.556205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.243 qpair failed and we were unable to recover it. 00:25:49.243 [2024-11-20 07:27:52.556391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.243 [2024-11-20 07:27:52.556456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.243 qpair failed and we were unable to recover it. 00:25:49.243 [2024-11-20 07:27:52.556665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.243 [2024-11-20 07:27:52.556719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.243 qpair failed and we were unable to recover it. 00:25:49.243 [2024-11-20 07:27:52.556951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.243 [2024-11-20 07:27:52.557004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.243 qpair failed and we were unable to recover it. 00:25:49.243 [2024-11-20 07:27:52.557242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.243 [2024-11-20 07:27:52.557296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.243 qpair failed and we were unable to recover it. 00:25:49.243 [2024-11-20 07:27:52.557523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.243 [2024-11-20 07:27:52.557578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.243 qpair failed and we were unable to recover it. 00:25:49.243 [2024-11-20 07:27:52.557782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.243 [2024-11-20 07:27:52.557837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.243 qpair failed and we were unable to recover it. 
00:25:49.243 [2024-11-20 07:27:52.558079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.243 [2024-11-20 07:27:52.558132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.243 qpair failed and we were unable to recover it. 00:25:49.243 [2024-11-20 07:27:52.558338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.243 [2024-11-20 07:27:52.558393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.243 qpair failed and we were unable to recover it. 00:25:49.243 [2024-11-20 07:27:52.558580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.243 [2024-11-20 07:27:52.558632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.243 qpair failed and we were unable to recover it. 00:25:49.243 [2024-11-20 07:27:52.558832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.243 [2024-11-20 07:27:52.558885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.243 qpair failed and we were unable to recover it. 00:25:49.243 [2024-11-20 07:27:52.559055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.243 [2024-11-20 07:27:52.559108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.243 qpair failed and we were unable to recover it. 00:25:49.243 [2024-11-20 07:27:52.559296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-11-20 07:27:52.559360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.244 qpair failed and we were unable to recover it. 00:25:49.244 [2024-11-20 07:27:52.559530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-11-20 07:27:52.559586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.244 qpair failed and we were unable to recover it. 00:25:49.244 [2024-11-20 07:27:52.559783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-11-20 07:27:52.559837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.244 qpair failed and we were unable to recover it. 00:25:49.244 [2024-11-20 07:27:52.560050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-11-20 07:27:52.560112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.244 qpair failed and we were unable to recover it. 00:25:49.244 [2024-11-20 07:27:52.560260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-11-20 07:27:52.560287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.244 qpair failed and we were unable to recover it. 
00:25:49.244 [2024-11-20 07:27:52.560386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-11-20 07:27:52.560413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.244 qpair failed and we were unable to recover it. 00:25:49.244 [2024-11-20 07:27:52.560532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-11-20 07:27:52.560558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.244 qpair failed and we were unable to recover it. 00:25:49.244 [2024-11-20 07:27:52.560654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-11-20 07:27:52.560681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.244 qpair failed and we were unable to recover it. 00:25:49.244 [2024-11-20 07:27:52.560815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-11-20 07:27:52.560868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.244 qpair failed and we were unable to recover it. 00:25:49.244 [2024-11-20 07:27:52.561077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-11-20 07:27:52.561105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.244 qpair failed and we were unable to recover it. 00:25:49.244 [2024-11-20 07:27:52.561243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-11-20 07:27:52.561271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.244 qpair failed and we were unable to recover it. 00:25:49.244 [2024-11-20 07:27:52.561368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-11-20 07:27:52.561396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.244 qpair failed and we were unable to recover it. 00:25:49.244 [2024-11-20 07:27:52.561482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-11-20 07:27:52.561510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.244 qpair failed and we were unable to recover it. 00:25:49.244 [2024-11-20 07:27:52.561601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-11-20 07:27:52.561646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.244 qpair failed and we were unable to recover it. 00:25:49.244 [2024-11-20 07:27:52.561860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-11-20 07:27:52.561916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.244 qpair failed and we were unable to recover it. 
00:25:49.244 [2024-11-20 07:27:52.562149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-11-20 07:27:52.562202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.244 qpair failed and we were unable to recover it. 00:25:49.244 [2024-11-20 07:27:52.562419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-11-20 07:27:52.562455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.244 qpair failed and we were unable to recover it. 00:25:49.244 [2024-11-20 07:27:52.562562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-11-20 07:27:52.562623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.244 qpair failed and we were unable to recover it. 00:25:49.244 [2024-11-20 07:27:52.562862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-11-20 07:27:52.562951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.244 qpair failed and we were unable to recover it. 00:25:49.244 [2024-11-20 07:27:52.563139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-11-20 07:27:52.563173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.244 qpair failed and we were unable to recover it. 00:25:49.244 [2024-11-20 07:27:52.563387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-11-20 07:27:52.563422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.244 qpair failed and we were unable to recover it. 00:25:49.244 [2024-11-20 07:27:52.563547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-11-20 07:27:52.563582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.244 qpair failed and we were unable to recover it. 00:25:49.244 [2024-11-20 07:27:52.563749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-11-20 07:27:52.563829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.244 qpair failed and we were unable to recover it. 00:25:49.244 [2024-11-20 07:27:52.563957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-11-20 07:27:52.563991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.244 qpair failed and we were unable to recover it. 00:25:49.244 [2024-11-20 07:27:52.564111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-11-20 07:27:52.564146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.244 qpair failed and we were unable to recover it. 
00:25:49.244 [2024-11-20 07:27:52.564384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-11-20 07:27:52.564420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.244 qpair failed and we were unable to recover it. 00:25:49.244 [2024-11-20 07:27:52.564520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-11-20 07:27:52.564555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.244 qpair failed and we were unable to recover it. 00:25:49.244 [2024-11-20 07:27:52.564721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-11-20 07:27:52.564776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.244 qpair failed and we were unable to recover it. 00:25:49.244 [2024-11-20 07:27:52.565007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-11-20 07:27:52.565062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.244 qpair failed and we were unable to recover it. 00:25:49.244 [2024-11-20 07:27:52.565253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-11-20 07:27:52.565328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.244 qpair failed and we were unable to recover it. 00:25:49.244 [2024-11-20 07:27:52.565567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-11-20 07:27:52.565629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.244 qpair failed and we were unable to recover it. 00:25:49.244 [2024-11-20 07:27:52.565826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-11-20 07:27:52.565881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.244 qpair failed and we were unable to recover it. 00:25:49.244 [2024-11-20 07:27:52.566051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-11-20 07:27:52.566104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.244 qpair failed and we were unable to recover it. 00:25:49.244 [2024-11-20 07:27:52.566325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-11-20 07:27:52.566380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.244 qpair failed and we were unable to recover it. 00:25:49.244 [2024-11-20 07:27:52.566588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-11-20 07:27:52.566645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.244 qpair failed and we were unable to recover it. 
00:25:49.244 [2024-11-20 07:27:52.566847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-11-20 07:27:52.566901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.244 qpair failed and we were unable to recover it. 00:25:49.244 [2024-11-20 07:27:52.567087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-11-20 07:27:52.567142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.244 qpair failed and we were unable to recover it. 00:25:49.244 [2024-11-20 07:27:52.567372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-11-20 07:27:52.567426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.244 qpair failed and we were unable to recover it. 00:25:49.245 [2024-11-20 07:27:52.567607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-11-20 07:27:52.567660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.245 qpair failed and we were unable to recover it. 00:25:49.245 [2024-11-20 07:27:52.567861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-11-20 07:27:52.567914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.245 qpair failed and we were unable to recover it. 00:25:49.245 [2024-11-20 07:27:52.568151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-11-20 07:27:52.568205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.245 qpair failed and we were unable to recover it. 00:25:49.245 [2024-11-20 07:27:52.568440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-11-20 07:27:52.568497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.245 qpair failed and we were unable to recover it. 00:25:49.245 [2024-11-20 07:27:52.568704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-11-20 07:27:52.568757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.245 qpair failed and we were unable to recover it. 00:25:49.245 [2024-11-20 07:27:52.568963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-11-20 07:27:52.569017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.245 qpair failed and we were unable to recover it. 00:25:49.245 [2024-11-20 07:27:52.569213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-11-20 07:27:52.569267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.245 qpair failed and we were unable to recover it. 
00:25:49.245 [2024-11-20 07:27:52.569528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-11-20 07:27:52.569581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.245 qpair failed and we were unable to recover it. 00:25:49.245 [2024-11-20 07:27:52.569735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-11-20 07:27:52.569789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.245 qpair failed and we were unable to recover it. 00:25:49.245 [2024-11-20 07:27:52.569974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-11-20 07:27:52.570028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.245 qpair failed and we were unable to recover it. 00:25:49.245 [2024-11-20 07:27:52.570188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-11-20 07:27:52.570242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.245 qpair failed and we were unable to recover it. 00:25:49.245 [2024-11-20 07:27:52.570452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-11-20 07:27:52.570506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.245 qpair failed and we were unable to recover it. 00:25:49.245 [2024-11-20 07:27:52.570749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-11-20 07:27:52.570803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.245 qpair failed and we were unable to recover it. 00:25:49.245 [2024-11-20 07:27:52.571017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-11-20 07:27:52.571071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.245 qpair failed and we were unable to recover it. 00:25:49.245 [2024-11-20 07:27:52.571292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-11-20 07:27:52.571382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.245 qpair failed and we were unable to recover it. 00:25:49.245 [2024-11-20 07:27:52.571627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-11-20 07:27:52.571681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.245 qpair failed and we were unable to recover it. 00:25:49.245 [2024-11-20 07:27:52.571890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-11-20 07:27:52.571944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.245 qpair failed and we were unable to recover it. 
00:25:49.245 [2024-11-20 07:27:52.572145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-11-20 07:27:52.572199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.245 qpair failed and we were unable to recover it. 00:25:49.245 [2024-11-20 07:27:52.572368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-11-20 07:27:52.572423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.245 qpair failed and we were unable to recover it. 00:25:49.245 [2024-11-20 07:27:52.572611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-11-20 07:27:52.572665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.245 qpair failed and we were unable to recover it. 00:25:49.245 [2024-11-20 07:27:52.572836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-11-20 07:27:52.572892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.245 qpair failed and we were unable to recover it. 00:25:49.245 [2024-11-20 07:27:52.573102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-11-20 07:27:52.573156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.245 qpair failed and we were unable to recover it. 00:25:49.245 [2024-11-20 07:27:52.573370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-11-20 07:27:52.573425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.245 qpair failed and we were unable to recover it. 00:25:49.245 [2024-11-20 07:27:52.573606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-11-20 07:27:52.573660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.245 qpair failed and we were unable to recover it. 00:25:49.245 [2024-11-20 07:27:52.573898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-11-20 07:27:52.573951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.245 qpair failed and we were unable to recover it. 00:25:49.245 [2024-11-20 07:27:52.574160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-11-20 07:27:52.574213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.245 qpair failed and we were unable to recover it. 00:25:49.245 [2024-11-20 07:27:52.574425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-11-20 07:27:52.574480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.245 qpair failed and we were unable to recover it. 
00:25:49.245 [2024-11-20 07:27:52.574698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-11-20 07:27:52.574753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.245 qpair failed and we were unable to recover it. 00:25:49.245 [2024-11-20 07:27:52.574914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-11-20 07:27:52.574967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.245 qpair failed and we were unable to recover it. 00:25:49.245 [2024-11-20 07:27:52.575143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-11-20 07:27:52.575197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.245 qpair failed and we were unable to recover it. 00:25:49.245 [2024-11-20 07:27:52.575373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-11-20 07:27:52.575455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.245 qpair failed and we were unable to recover it. 00:25:49.245 [2024-11-20 07:27:52.575655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-11-20 07:27:52.575720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.245 qpair failed and we were unable to recover it. 00:25:49.245 [2024-11-20 07:27:52.575897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-11-20 07:27:52.575952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.245 qpair failed and we were unable to recover it. 00:25:49.245 [2024-11-20 07:27:52.576114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-11-20 07:27:52.576168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.245 qpair failed and we were unable to recover it. 00:25:49.245 [2024-11-20 07:27:52.576368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.246 [2024-11-20 07:27:52.576424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.246 qpair failed and we were unable to recover it. 00:25:49.246 [2024-11-20 07:27:52.576604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.246 [2024-11-20 07:27:52.576658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.246 qpair failed and we were unable to recover it. 00:25:49.246 [2024-11-20 07:27:52.576863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.246 [2024-11-20 07:27:52.576918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.246 qpair failed and we were unable to recover it. 
00:25:49.246 [2024-11-20 07:27:52.577095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.246 [2024-11-20 07:27:52.577150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.246 qpair failed and we were unable to recover it. 00:25:49.246 [2024-11-20 07:27:52.577345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.246 [2024-11-20 07:27:52.577400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.246 qpair failed and we were unable to recover it. 00:25:49.246 [2024-11-20 07:27:52.577588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.246 [2024-11-20 07:27:52.577664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.246 qpair failed and we were unable to recover it. 00:25:49.246 [2024-11-20 07:27:52.577907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.246 [2024-11-20 07:27:52.577962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.246 qpair failed and we were unable to recover it. 00:25:49.246 [2024-11-20 07:27:52.578189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.246 [2024-11-20 07:27:52.578243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.246 qpair failed and we were unable to recover it. 00:25:49.246 [2024-11-20 07:27:52.578467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.246 [2024-11-20 07:27:52.578524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.246 qpair failed and we were unable to recover it. 00:25:49.246 [2024-11-20 07:27:52.578696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.246 [2024-11-20 07:27:52.578750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.246 qpair failed and we were unable to recover it. 00:25:49.246 [2024-11-20 07:27:52.579017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.246 [2024-11-20 07:27:52.579071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.246 qpair failed and we were unable to recover it. 00:25:49.246 [2024-11-20 07:27:52.579278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.246 [2024-11-20 07:27:52.579366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.246 qpair failed and we were unable to recover it. 00:25:49.246 [2024-11-20 07:27:52.579586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.246 [2024-11-20 07:27:52.579641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.246 qpair failed and we were unable to recover it. 
00:25:49.246 [2024-11-20 07:27:52.579818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.246 [2024-11-20 07:27:52.579872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.246 qpair failed and we were unable to recover it. 00:25:49.246 [2024-11-20 07:27:52.580082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.522 [2024-11-20 07:27:52.580135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.522 qpair failed and we were unable to recover it. 00:25:49.522 [2024-11-20 07:27:52.580356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.522 [2024-11-20 07:27:52.580412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.522 qpair failed and we were unable to recover it. 00:25:49.522 [2024-11-20 07:27:52.580596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.522 [2024-11-20 07:27:52.580650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.522 qpair failed and we were unable to recover it. 00:25:49.522 [2024-11-20 07:27:52.580826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.522 [2024-11-20 07:27:52.580879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.522 qpair failed and we were unable to recover it. 00:25:49.522 [2024-11-20 07:27:52.581084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.522 [2024-11-20 07:27:52.581138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.522 qpair failed and we were unable to recover it. 00:25:49.522 [2024-11-20 07:27:52.581346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.522 [2024-11-20 07:27:52.581403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.522 qpair failed and we were unable to recover it. 00:25:49.522 [2024-11-20 07:27:52.581614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.522 [2024-11-20 07:27:52.581669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.522 qpair failed and we were unable to recover it. 00:25:49.522 [2024-11-20 07:27:52.581823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.522 [2024-11-20 07:27:52.581877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.522 qpair failed and we were unable to recover it. 00:25:49.522 [2024-11-20 07:27:52.582082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.522 [2024-11-20 07:27:52.582136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.522 qpair failed and we were unable to recover it. 
00:25:49.522 [2024-11-20 07:27:52.582340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.522 [2024-11-20 07:27:52.582394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.522 qpair failed and we were unable to recover it. 00:25:49.522 [2024-11-20 07:27:52.582663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.522 [2024-11-20 07:27:52.582747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.522 qpair failed and we were unable to recover it. 00:25:49.522 [2024-11-20 07:27:52.582953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.522 [2024-11-20 07:27:52.583012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.522 qpair failed and we were unable to recover it. 00:25:49.522 [2024-11-20 07:27:52.583230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.522 [2024-11-20 07:27:52.583285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.522 qpair failed and we were unable to recover it. 00:25:49.522 [2024-11-20 07:27:52.583503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.522 [2024-11-20 07:27:52.583559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.522 qpair failed and we were unable to recover it. 00:25:49.522 [2024-11-20 07:27:52.583738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.522 [2024-11-20 07:27:52.583794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.522 qpair failed and we were unable to recover it. 00:25:49.522 [2024-11-20 07:27:52.583998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.522 [2024-11-20 07:27:52.584052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.522 qpair failed and we were unable to recover it. 00:25:49.522 [2024-11-20 07:27:52.584324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.522 [2024-11-20 07:27:52.584384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.522 qpair failed and we were unable to recover it. 00:25:49.522 [2024-11-20 07:27:52.584565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.522 [2024-11-20 07:27:52.584623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.522 qpair failed and we were unable to recover it. 00:25:49.522 [2024-11-20 07:27:52.584871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.522 [2024-11-20 07:27:52.584928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.522 qpair failed and we were unable to recover it. 
00:25:49.522 [2024-11-20 07:27:52.585119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.522 [2024-11-20 07:27:52.585176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.522 qpair failed and we were unable to recover it. 00:25:49.522 [2024-11-20 07:27:52.585375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.522 [2024-11-20 07:27:52.585435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.522 qpair failed and we were unable to recover it. 00:25:49.522 [2024-11-20 07:27:52.585670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.522 [2024-11-20 07:27:52.585734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.522 qpair failed and we were unable to recover it. 00:25:49.522 [2024-11-20 07:27:52.585940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.522 [2024-11-20 07:27:52.585998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.522 qpair failed and we were unable to recover it. 00:25:49.522 [2024-11-20 07:27:52.586206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.522 [2024-11-20 07:27:52.586284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.522 qpair failed and we were unable to recover it. 00:25:49.522 [2024-11-20 07:27:52.586537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.522 [2024-11-20 07:27:52.586596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.522 qpair failed and we were unable to recover it. 00:25:49.522 [2024-11-20 07:27:52.586786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.522 [2024-11-20 07:27:52.586850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.522 qpair failed and we were unable to recover it. 00:25:49.522 [2024-11-20 07:27:52.587070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.522 [2024-11-20 07:27:52.587145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.522 qpair failed and we were unable to recover it. 00:25:49.522 [2024-11-20 07:27:52.587347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.523 [2024-11-20 07:27:52.587405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.523 qpair failed and we were unable to recover it. 00:25:49.523 [2024-11-20 07:27:52.587629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.523 [2024-11-20 07:27:52.587695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.523 qpair failed and we were unable to recover it. 
00:25:49.523 [2024-11-20 07:27:52.587908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.523 [2024-11-20 07:27:52.587971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.523 qpair failed and we were unable to recover it. 00:25:49.523 [2024-11-20 07:27:52.588131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.523 [2024-11-20 07:27:52.588189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.523 qpair failed and we were unable to recover it. 00:25:49.523 [2024-11-20 07:27:52.588377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.523 [2024-11-20 07:27:52.588435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.523 qpair failed and we were unable to recover it. 00:25:49.523 [2024-11-20 07:27:52.588602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.523 [2024-11-20 07:27:52.588661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.523 qpair failed and we were unable to recover it. 00:25:49.523 [2024-11-20 07:27:52.588912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.523 [2024-11-20 07:27:52.588968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.523 qpair failed and we were unable to recover it. 00:25:49.523 [2024-11-20 07:27:52.589189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.523 [2024-11-20 07:27:52.589247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.523 qpair failed and we were unable to recover it. 00:25:49.523 [2024-11-20 07:27:52.589471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.523 [2024-11-20 07:27:52.589530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.523 qpair failed and we were unable to recover it. 00:25:49.523 [2024-11-20 07:27:52.589769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.523 [2024-11-20 07:27:52.589825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.523 qpair failed and we were unable to recover it. 00:25:49.523 [2024-11-20 07:27:52.590053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.523 [2024-11-20 07:27:52.590110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.523 qpair failed and we were unable to recover it. 00:25:49.523 [2024-11-20 07:27:52.590328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.523 [2024-11-20 07:27:52.590387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.523 qpair failed and we were unable to recover it. 
00:25:49.523 [2024-11-20 07:27:52.590638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.523 [2024-11-20 07:27:52.590694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.523 qpair failed and we were unable to recover it. 00:25:49.523 [2024-11-20 07:27:52.590894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.523 [2024-11-20 07:27:52.590962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.523 qpair failed and we were unable to recover it. 00:25:49.523 [2024-11-20 07:27:52.591224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.523 [2024-11-20 07:27:52.591291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.523 qpair failed and we were unable to recover it. 00:25:49.523 [2024-11-20 07:27:52.591504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.523 [2024-11-20 07:27:52.591561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.523 qpair failed and we were unable to recover it. 00:25:49.523 [2024-11-20 07:27:52.591804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.523 [2024-11-20 07:27:52.591861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.523 qpair failed and we were unable to recover it. 00:25:49.523 [2024-11-20 07:27:52.592048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.523 [2024-11-20 07:27:52.592106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.523 qpair failed and we were unable to recover it. 00:25:49.523 [2024-11-20 07:27:52.592364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.523 [2024-11-20 07:27:52.592423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.523 qpair failed and we were unable to recover it. 00:25:49.523 [2024-11-20 07:27:52.592642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.523 [2024-11-20 07:27:52.592701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.523 qpair failed and we were unable to recover it. 00:25:49.523 [2024-11-20 07:27:52.592916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.523 [2024-11-20 07:27:52.592972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.523 qpair failed and we were unable to recover it. 00:25:49.523 [2024-11-20 07:27:52.593161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.523 [2024-11-20 07:27:52.593218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.523 qpair failed and we were unable to recover it. 
00:25:49.523 [2024-11-20 07:27:52.593457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.523 [2024-11-20 07:27:52.593514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.523 qpair failed and we were unable to recover it. 00:25:49.523 [2024-11-20 07:27:52.593735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.523 [2024-11-20 07:27:52.593792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.523 qpair failed and we were unable to recover it. 00:25:49.523 [2024-11-20 07:27:52.593999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.523 [2024-11-20 07:27:52.594055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.523 qpair failed and we were unable to recover it. 00:25:49.523 [2024-11-20 07:27:52.594319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.523 [2024-11-20 07:27:52.594376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.523 qpair failed and we were unable to recover it. 00:25:49.523 [2024-11-20 07:27:52.594636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.523 [2024-11-20 07:27:52.594693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.523 qpair failed and we were unable to recover it. 00:25:49.523 [2024-11-20 07:27:52.594928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.523 [2024-11-20 07:27:52.594984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.523 qpair failed and we were unable to recover it. 00:25:49.523 [2024-11-20 07:27:52.595211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.523 [2024-11-20 07:27:52.595266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.523 qpair failed and we were unable to recover it. 00:25:49.523 [2024-11-20 07:27:52.595506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.523 [2024-11-20 07:27:52.595564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.523 qpair failed and we were unable to recover it. 00:25:49.523 [2024-11-20 07:27:52.595758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.524 [2024-11-20 07:27:52.595823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.524 qpair failed and we were unable to recover it. 00:25:49.524 [2024-11-20 07:27:52.596142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.524 [2024-11-20 07:27:52.596204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.524 qpair failed and we were unable to recover it. 
00:25:49.524 [2024-11-20 07:27:52.596439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.524 [2024-11-20 07:27:52.596497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.524 qpair failed and we were unable to recover it. 00:25:49.524 [2024-11-20 07:27:52.596691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.524 [2024-11-20 07:27:52.596748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.524 qpair failed and we were unable to recover it. 00:25:49.524 [2024-11-20 07:27:52.597001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.524 [2024-11-20 07:27:52.597072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.524 qpair failed and we were unable to recover it. 00:25:49.524 [2024-11-20 07:27:52.597317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.524 [2024-11-20 07:27:52.597376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.524 qpair failed and we were unable to recover it. 00:25:49.524 [2024-11-20 07:27:52.597620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.524 [2024-11-20 07:27:52.597681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.524 qpair failed and we were unable to recover it. 00:25:49.524 [2024-11-20 07:27:52.597939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.524 [2024-11-20 07:27:52.597996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.524 qpair failed and we were unable to recover it. 00:25:49.524 [2024-11-20 07:27:52.598261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.524 [2024-11-20 07:27:52.598352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.524 qpair failed and we were unable to recover it. 00:25:49.524 [2024-11-20 07:27:52.598544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.524 [2024-11-20 07:27:52.598604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.524 qpair failed and we were unable to recover it. 00:25:49.524 [2024-11-20 07:27:52.598835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.524 [2024-11-20 07:27:52.598894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.524 qpair failed and we were unable to recover it. 00:25:49.524 [2024-11-20 07:27:52.599083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.524 [2024-11-20 07:27:52.599142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.524 qpair failed and we were unable to recover it. 
00:25:49.524 [2024-11-20 07:27:52.599330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.524 [2024-11-20 07:27:52.599389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.524 qpair failed and we were unable to recover it. 00:25:49.524 [2024-11-20 07:27:52.599648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.524 [2024-11-20 07:27:52.599704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.524 qpair failed and we were unable to recover it. 00:25:49.524 [2024-11-20 07:27:52.599881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.524 [2024-11-20 07:27:52.599939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.524 qpair failed and we were unable to recover it. 00:25:49.524 [2024-11-20 07:27:52.600107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.524 [2024-11-20 07:27:52.600164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.524 qpair failed and we were unable to recover it. 00:25:49.524 [2024-11-20 07:27:52.600364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.524 [2024-11-20 07:27:52.600423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.524 qpair failed and we were unable to recover it. 00:25:49.524 [2024-11-20 07:27:52.600613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.524 [2024-11-20 07:27:52.600672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.524 qpair failed and we were unable to recover it. 00:25:49.524 [2024-11-20 07:27:52.600935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.524 [2024-11-20 07:27:52.600991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.524 qpair failed and we were unable to recover it. 00:25:49.524 [2024-11-20 07:27:52.601209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.524 [2024-11-20 07:27:52.601268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.524 qpair failed and we were unable to recover it. 00:25:49.524 [2024-11-20 07:27:52.601475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.524 [2024-11-20 07:27:52.601534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.524 qpair failed and we were unable to recover it. 00:25:49.524 [2024-11-20 07:27:52.601746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.524 [2024-11-20 07:27:52.601804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.524 qpair failed and we were unable to recover it. 
00:25:49.524 [2024-11-20 07:27:52.602052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.524 [2024-11-20 07:27:52.602108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.524 qpair failed and we were unable to recover it. 00:25:49.524 [2024-11-20 07:27:52.602329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.524 [2024-11-20 07:27:52.602388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.524 qpair failed and we were unable to recover it. 00:25:49.524 [2024-11-20 07:27:52.602592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.524 [2024-11-20 07:27:52.602649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.524 qpair failed and we were unable to recover it. 00:25:49.524 [2024-11-20 07:27:52.602839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.524 [2024-11-20 07:27:52.602895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.524 qpair failed and we were unable to recover it. 00:25:49.524 [2024-11-20 07:27:52.603145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.524 [2024-11-20 07:27:52.603202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.524 qpair failed and we were unable to recover it. 00:25:49.524 [2024-11-20 07:27:52.603477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.524 [2024-11-20 07:27:52.603540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.524 qpair failed and we were unable to recover it. 00:25:49.524 [2024-11-20 07:27:52.603812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.524 [2024-11-20 07:27:52.603872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.524 qpair failed and we were unable to recover it. 00:25:49.524 [2024-11-20 07:27:52.604103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.524 [2024-11-20 07:27:52.604164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.524 qpair failed and we were unable to recover it. 00:25:49.524 [2024-11-20 07:27:52.604377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.524 [2024-11-20 07:27:52.604443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.524 qpair failed and we were unable to recover it. 00:25:49.524 [2024-11-20 07:27:52.604687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.524 [2024-11-20 07:27:52.604748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.524 qpair failed and we were unable to recover it. 
00:25:49.524 [2024-11-20 07:27:52.604924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.524 [2024-11-20 07:27:52.604985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.524 qpair failed and we were unable to recover it. 00:25:49.525 [2024-11-20 07:27:52.605173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.525 [2024-11-20 07:27:52.605245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.525 qpair failed and we were unable to recover it. 00:25:49.525 [2024-11-20 07:27:52.605505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.525 [2024-11-20 07:27:52.605567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.525 qpair failed and we were unable to recover it. 00:25:49.525 [2024-11-20 07:27:52.605754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.525 [2024-11-20 07:27:52.605815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.525 qpair failed and we were unable to recover it. 00:25:49.525 [2024-11-20 07:27:52.606011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.525 [2024-11-20 07:27:52.606073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.525 qpair failed and we were unable to recover it. 00:25:49.525 [2024-11-20 07:27:52.606321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.525 [2024-11-20 07:27:52.606382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.525 qpair failed and we were unable to recover it. 00:25:49.525 [2024-11-20 07:27:52.606586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.525 [2024-11-20 07:27:52.606647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.525 qpair failed and we were unable to recover it. 00:25:49.525 [2024-11-20 07:27:52.606872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.525 [2024-11-20 07:27:52.606934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.525 qpair failed and we were unable to recover it. 00:25:49.525 [2024-11-20 07:27:52.607168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.525 [2024-11-20 07:27:52.607230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.525 qpair failed and we were unable to recover it. 00:25:49.525 [2024-11-20 07:27:52.607509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.525 [2024-11-20 07:27:52.607571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.525 qpair failed and we were unable to recover it. 
00:25:49.525 [2024-11-20 07:27:52.607875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.525 [2024-11-20 07:27:52.607936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.525 qpair failed and we were unable to recover it. 00:25:49.525 [2024-11-20 07:27:52.608213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.525 [2024-11-20 07:27:52.608275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.525 qpair failed and we were unable to recover it. 00:25:49.525 [2024-11-20 07:27:52.608538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.525 [2024-11-20 07:27:52.608600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.525 qpair failed and we were unable to recover it. 00:25:49.525 [2024-11-20 07:27:52.608834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.525 [2024-11-20 07:27:52.608897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.525 qpair failed and we were unable to recover it. 00:25:49.525 [2024-11-20 07:27:52.609079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.525 [2024-11-20 07:27:52.609141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.525 qpair failed and we were unable to recover it. 00:25:49.525 [2024-11-20 07:27:52.609419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.525 [2024-11-20 07:27:52.609482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.525 qpair failed and we were unable to recover it. 00:25:49.525 [2024-11-20 07:27:52.609722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.525 [2024-11-20 07:27:52.609785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.525 qpair failed and we were unable to recover it. 00:25:49.525 [2024-11-20 07:27:52.610011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.525 [2024-11-20 07:27:52.610071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.525 qpair failed and we were unable to recover it. 00:25:49.525 [2024-11-20 07:27:52.610276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.525 [2024-11-20 07:27:52.610352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.525 qpair failed and we were unable to recover it. 00:25:49.525 [2024-11-20 07:27:52.610580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.525 [2024-11-20 07:27:52.610640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.525 qpair failed and we were unable to recover it. 
00:25:49.525 [2024-11-20 07:27:52.610846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.525 [2024-11-20 07:27:52.610906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.525 qpair failed and we were unable to recover it. 00:25:49.525 [2024-11-20 07:27:52.611113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.525 [2024-11-20 07:27:52.611174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.525 qpair failed and we were unable to recover it. 00:25:49.525 [2024-11-20 07:27:52.611388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.525 [2024-11-20 07:27:52.611452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.525 qpair failed and we were unable to recover it. 00:25:49.525 [2024-11-20 07:27:52.611735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.525 [2024-11-20 07:27:52.611796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.525 qpair failed and we were unable to recover it. 00:25:49.525 [2024-11-20 07:27:52.612028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.525 [2024-11-20 07:27:52.612090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.525 qpair failed and we were unable to recover it. 00:25:49.525 [2024-11-20 07:27:52.612291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.525 [2024-11-20 07:27:52.612365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.525 qpair failed and we were unable to recover it. 00:25:49.525 [2024-11-20 07:27:52.612570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.525 [2024-11-20 07:27:52.612632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.525 qpair failed and we were unable to recover it. 00:25:49.525 [2024-11-20 07:27:52.612859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.525 [2024-11-20 07:27:52.612922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.525 qpair failed and we were unable to recover it. 00:25:49.525 [2024-11-20 07:27:52.613176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.525 [2024-11-20 07:27:52.613237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.525 qpair failed and we were unable to recover it. 00:25:49.525 [2024-11-20 07:27:52.613480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.525 [2024-11-20 07:27:52.613543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.525 qpair failed and we were unable to recover it. 
00:25:49.525 [2024-11-20 07:27:52.613780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.525 [2024-11-20 07:27:52.613843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.525 qpair failed and we were unable to recover it. 00:25:49.525 [2024-11-20 07:27:52.614113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.525 [2024-11-20 07:27:52.614174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.525 qpair failed and we were unable to recover it. 00:25:49.526 [2024-11-20 07:27:52.614362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.526 [2024-11-20 07:27:52.614426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.526 qpair failed and we were unable to recover it. 00:25:49.526 [2024-11-20 07:27:52.614640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.526 [2024-11-20 07:27:52.614703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.526 qpair failed and we were unable to recover it. 00:25:49.526 [2024-11-20 07:27:52.614913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.526 [2024-11-20 07:27:52.614973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.526 qpair failed and we were unable to recover it. 00:25:49.526 [2024-11-20 07:27:52.615203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.526 [2024-11-20 07:27:52.615265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.526 qpair failed and we were unable to recover it. 00:25:49.526 [2024-11-20 07:27:52.615525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.526 [2024-11-20 07:27:52.615588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.526 qpair failed and we were unable to recover it. 00:25:49.526 [2024-11-20 07:27:52.615840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.526 [2024-11-20 07:27:52.615901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.526 qpair failed and we were unable to recover it. 00:25:49.526 [2024-11-20 07:27:52.616172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.526 [2024-11-20 07:27:52.616233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.526 qpair failed and we were unable to recover it. 00:25:49.526 [2024-11-20 07:27:52.616475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.526 [2024-11-20 07:27:52.616538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.526 qpair failed and we were unable to recover it. 
00:25:49.526 [2024-11-20 07:27:52.616811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.526 [2024-11-20 07:27:52.616872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.526 qpair failed and we were unable to recover it. 00:25:49.526 [2024-11-20 07:27:52.617140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.526 [2024-11-20 07:27:52.617217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.526 qpair failed and we were unable to recover it. 00:25:49.526 [2024-11-20 07:27:52.617497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.526 [2024-11-20 07:27:52.617566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.526 qpair failed and we were unable to recover it. 00:25:49.526 [2024-11-20 07:27:52.617856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.526 [2024-11-20 07:27:52.617924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.526 qpair failed and we were unable to recover it. 00:25:49.526 [2024-11-20 07:27:52.618143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.526 [2024-11-20 07:27:52.618212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.526 qpair failed and we were unable to recover it. 00:25:49.526 [2024-11-20 07:27:52.618515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.526 [2024-11-20 07:27:52.618578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.526 qpair failed and we were unable to recover it. 00:25:49.526 [2024-11-20 07:27:52.618810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.526 [2024-11-20 07:27:52.618870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.526 qpair failed and we were unable to recover it. 00:25:49.526 [2024-11-20 07:27:52.619162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.526 [2024-11-20 07:27:52.619228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.526 qpair failed and we were unable to recover it. 00:25:49.526 [2024-11-20 07:27:52.619473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.526 [2024-11-20 07:27:52.619536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.526 qpair failed and we were unable to recover it. 00:25:49.526 [2024-11-20 07:27:52.619775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.526 [2024-11-20 07:27:52.619835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.526 qpair failed and we were unable to recover it. 
00:25:49.526 [2024-11-20 07:27:52.620100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.526 [2024-11-20 07:27:52.620163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.526 qpair failed and we were unable to recover it. 00:25:49.526 [2024-11-20 07:27:52.620365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.526 [2024-11-20 07:27:52.620429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.526 qpair failed and we were unable to recover it. 00:25:49.526 [2024-11-20 07:27:52.620657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.526 [2024-11-20 07:27:52.620718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.526 qpair failed and we were unable to recover it. 00:25:49.526 [2024-11-20 07:27:52.620978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.526 [2024-11-20 07:27:52.621038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.526 qpair failed and we were unable to recover it. 00:25:49.526 [2024-11-20 07:27:52.621263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.526 [2024-11-20 07:27:52.621339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.526 qpair failed and we were unable to recover it. 00:25:49.526 [2024-11-20 07:27:52.621557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.526 [2024-11-20 07:27:52.621620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.526 qpair failed and we were unable to recover it. 00:25:49.526 [2024-11-20 07:27:52.621830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.526 [2024-11-20 07:27:52.621894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.526 qpair failed and we were unable to recover it. 00:25:49.526 [2024-11-20 07:27:52.622148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.526 [2024-11-20 07:27:52.622212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.526 qpair failed and we were unable to recover it. 00:25:49.526 [2024-11-20 07:27:52.622512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.526 [2024-11-20 07:27:52.622579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.526 qpair failed and we were unable to recover it. 00:25:49.526 [2024-11-20 07:27:52.622829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.526 [2024-11-20 07:27:52.622896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.526 qpair failed and we were unable to recover it. 
00:25:49.526 [2024-11-20 07:27:52.623104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.526 [2024-11-20 07:27:52.623172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.526 qpair failed and we were unable to recover it. 00:25:49.526 [2024-11-20 07:27:52.623417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.526 [2024-11-20 07:27:52.623484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.526 qpair failed and we were unable to recover it. 00:25:49.526 [2024-11-20 07:27:52.623689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.527 [2024-11-20 07:27:52.623757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.527 qpair failed and we were unable to recover it. 00:25:49.527 [2024-11-20 07:27:52.624006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.527 [2024-11-20 07:27:52.624073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.527 qpair failed and we were unable to recover it. 00:25:49.527 [2024-11-20 07:27:52.624265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.527 [2024-11-20 07:27:52.624346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.527 qpair failed and we were unable to recover it. 00:25:49.527 [2024-11-20 07:27:52.624605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.527 [2024-11-20 07:27:52.624672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.527 qpair failed and we were unable to recover it. 00:25:49.527 [2024-11-20 07:27:52.624972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.527 [2024-11-20 07:27:52.625038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.527 qpair failed and we were unable to recover it. 00:25:49.527 [2024-11-20 07:27:52.625251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.527 [2024-11-20 07:27:52.625353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.527 qpair failed and we were unable to recover it. 00:25:49.527 [2024-11-20 07:27:52.625608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.527 [2024-11-20 07:27:52.625675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.527 qpair failed and we were unable to recover it. 00:25:49.527 [2024-11-20 07:27:52.625973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.527 [2024-11-20 07:27:52.626040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.527 qpair failed and we were unable to recover it. 
00:25:49.527 [2024-11-20 07:27:52.626262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.527 [2024-11-20 07:27:52.626349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.527 qpair failed and we were unable to recover it. 00:25:49.527 [2024-11-20 07:27:52.626615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.527 [2024-11-20 07:27:52.626682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.527 qpair failed and we were unable to recover it. 00:25:49.527 [2024-11-20 07:27:52.626994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.527 [2024-11-20 07:27:52.627060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.527 qpair failed and we were unable to recover it. 00:25:49.527 [2024-11-20 07:27:52.627261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.527 [2024-11-20 07:27:52.627351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.527 qpair failed and we were unable to recover it. 00:25:49.527 [2024-11-20 07:27:52.627617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.527 [2024-11-20 07:27:52.627683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.527 qpair failed and we were unable to recover it. 00:25:49.527 [2024-11-20 07:27:52.627976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.527 [2024-11-20 07:27:52.628043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.527 qpair failed and we were unable to recover it. 00:25:49.527 [2024-11-20 07:27:52.628264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.527 [2024-11-20 07:27:52.628350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.527 qpair failed and we were unable to recover it. 00:25:49.527 [2024-11-20 07:27:52.628632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.527 [2024-11-20 07:27:52.628699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.527 qpair failed and we were unable to recover it. 00:25:49.527 [2024-11-20 07:27:52.628935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.527 [2024-11-20 07:27:52.629002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.527 qpair failed and we were unable to recover it. 00:25:49.527 [2024-11-20 07:27:52.629214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.527 [2024-11-20 07:27:52.629279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.527 qpair failed and we were unable to recover it. 
00:25:49.527 [2024-11-20 07:27:52.629547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.527 [2024-11-20 07:27:52.629614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.527 qpair failed and we were unable to recover it. 00:25:49.527 [2024-11-20 07:27:52.629838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.527 [2024-11-20 07:27:52.629915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.527 qpair failed and we were unable to recover it. 00:25:49.527 [2024-11-20 07:27:52.630169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.527 [2024-11-20 07:27:52.630234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.527 qpair failed and we were unable to recover it. 00:25:49.527 [2024-11-20 07:27:52.630474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.527 [2024-11-20 07:27:52.630542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.527 qpair failed and we were unable to recover it. 00:25:49.527 [2024-11-20 07:27:52.630744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.527 [2024-11-20 07:27:52.630810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.527 qpair failed and we were unable to recover it. 00:25:49.527 [2024-11-20 07:27:52.631070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.527 [2024-11-20 07:27:52.631136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.528 qpair failed and we were unable to recover it. 00:25:49.528 [2024-11-20 07:27:52.631395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.528 [2024-11-20 07:27:52.631463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.528 qpair failed and we were unable to recover it. 00:25:49.528 [2024-11-20 07:27:52.631724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.528 [2024-11-20 07:27:52.631790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.528 qpair failed and we were unable to recover it. 00:25:49.528 [2024-11-20 07:27:52.631990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.528 [2024-11-20 07:27:52.632058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.528 qpair failed and we were unable to recover it. 00:25:49.528 [2024-11-20 07:27:52.632280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.528 [2024-11-20 07:27:52.632360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.528 qpair failed and we were unable to recover it. 
00:25:49.528 [2024-11-20 07:27:52.632652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.528 [2024-11-20 07:27:52.632718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.528 qpair failed and we were unable to recover it. 00:25:49.528 [2024-11-20 07:27:52.632967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.528 [2024-11-20 07:27:52.633032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.528 qpair failed and we were unable to recover it. 00:25:49.528 [2024-11-20 07:27:52.633258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.528 [2024-11-20 07:27:52.633357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.528 qpair failed and we were unable to recover it. 00:25:49.528 [2024-11-20 07:27:52.633642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.528 [2024-11-20 07:27:52.633708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.528 qpair failed and we were unable to recover it. 00:25:49.528 [2024-11-20 07:27:52.633999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.528 [2024-11-20 07:27:52.634065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.528 qpair failed and we were unable to recover it. 00:25:49.528 [2024-11-20 07:27:52.634371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.528 [2024-11-20 07:27:52.634439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.528 qpair failed and we were unable to recover it. 00:25:49.528 [2024-11-20 07:27:52.634642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.528 [2024-11-20 07:27:52.634708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.528 qpair failed and we were unable to recover it. 00:25:49.528 [2024-11-20 07:27:52.634924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.528 [2024-11-20 07:27:52.634990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.528 qpair failed and we were unable to recover it. 00:25:49.528 [2024-11-20 07:27:52.635193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.528 [2024-11-20 07:27:52.635258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.528 qpair failed and we were unable to recover it. 00:25:49.528 [2024-11-20 07:27:52.635525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.528 [2024-11-20 07:27:52.635591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.528 qpair failed and we were unable to recover it. 
00:25:49.528 [2024-11-20 07:27:52.635835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.528 [2024-11-20 07:27:52.635902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.528 qpair failed and we were unable to recover it. 00:25:49.528 [2024-11-20 07:27:52.636100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.528 [2024-11-20 07:27:52.636166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.528 qpair failed and we were unable to recover it. 00:25:49.528 [2024-11-20 07:27:52.636432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.528 [2024-11-20 07:27:52.636500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.528 qpair failed and we were unable to recover it. 00:25:49.528 [2024-11-20 07:27:52.636708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.528 [2024-11-20 07:27:52.636777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.528 qpair failed and we were unable to recover it. 00:25:49.528 [2024-11-20 07:27:52.637069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.528 [2024-11-20 07:27:52.637135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.528 qpair failed and we were unable to recover it. 00:25:49.528 [2024-11-20 07:27:52.637380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.528 [2024-11-20 07:27:52.637447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.528 qpair failed and we were unable to recover it. 00:25:49.528 [2024-11-20 07:27:52.637717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.528 [2024-11-20 07:27:52.637782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.528 qpair failed and we were unable to recover it. 00:25:49.528 [2024-11-20 07:27:52.638015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.528 [2024-11-20 07:27:52.638081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.528 qpair failed and we were unable to recover it. 00:25:49.528 [2024-11-20 07:27:52.638345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.528 [2024-11-20 07:27:52.638412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.528 qpair failed and we were unable to recover it. 00:25:49.528 [2024-11-20 07:27:52.638702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.528 [2024-11-20 07:27:52.638768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.528 qpair failed and we were unable to recover it. 
00:25:49.528 [2024-11-20 07:27:52.639023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.528 [2024-11-20 07:27:52.639089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.528 qpair failed and we were unable to recover it. 00:25:49.528 [2024-11-20 07:27:52.639391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.528 [2024-11-20 07:27:52.639460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.528 qpair failed and we were unable to recover it. 00:25:49.528 [2024-11-20 07:27:52.639756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.528 [2024-11-20 07:27:52.639821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.528 qpair failed and we were unable to recover it. 00:25:49.528 [2024-11-20 07:27:52.640024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.528 [2024-11-20 07:27:52.640091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.528 qpair failed and we were unable to recover it. 00:25:49.528 [2024-11-20 07:27:52.640373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.528 [2024-11-20 07:27:52.640442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.528 qpair failed and we were unable to recover it. 00:25:49.528 [2024-11-20 07:27:52.640731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.528 [2024-11-20 07:27:52.640797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.528 qpair failed and we were unable to recover it. 00:25:49.528 [2024-11-20 07:27:52.641092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.529 [2024-11-20 07:27:52.641159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.529 qpair failed and we were unable to recover it. 00:25:49.529 [2024-11-20 07:27:52.641373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.529 [2024-11-20 07:27:52.641441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.529 qpair failed and we were unable to recover it. 00:25:49.529 [2024-11-20 07:27:52.641702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.529 [2024-11-20 07:27:52.641769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.529 qpair failed and we were unable to recover it. 00:25:49.529 [2024-11-20 07:27:52.641996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.529 [2024-11-20 07:27:52.642062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.529 qpair failed and we were unable to recover it. 
00:25:49.529 [2024-11-20 07:27:52.642326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.529 [2024-11-20 07:27:52.642395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.529 qpair failed and we were unable to recover it. 00:25:49.529 [2024-11-20 07:27:52.642636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.529 [2024-11-20 07:27:52.642715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.529 qpair failed and we were unable to recover it. 00:25:49.529 [2024-11-20 07:27:52.642974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.529 [2024-11-20 07:27:52.643041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.529 qpair failed and we were unable to recover it. 00:25:49.529 [2024-11-20 07:27:52.643283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.529 [2024-11-20 07:27:52.643363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.529 qpair failed and we were unable to recover it. 00:25:49.529 [2024-11-20 07:27:52.643664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.529 [2024-11-20 07:27:52.643736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.529 qpair failed and we were unable to recover it. 00:25:49.529 [2024-11-20 07:27:52.643983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.529 [2024-11-20 07:27:52.644055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.529 qpair failed and we were unable to recover it. 00:25:49.529 [2024-11-20 07:27:52.644331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.529 [2024-11-20 07:27:52.644399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.529 qpair failed and we were unable to recover it. 00:25:49.529 [2024-11-20 07:27:52.644622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.529 [2024-11-20 07:27:52.644687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.529 qpair failed and we were unable to recover it. 00:25:49.529 [2024-11-20 07:27:52.644915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.529 [2024-11-20 07:27:52.644982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.529 qpair failed and we were unable to recover it. 00:25:49.529 [2024-11-20 07:27:52.645193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.529 [2024-11-20 07:27:52.645261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.529 qpair failed and we were unable to recover it. 
00:25:49.529 [2024-11-20 07:27:52.645542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.529 [2024-11-20 07:27:52.645609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.529 qpair failed and we were unable to recover it. 00:25:49.529 [2024-11-20 07:27:52.645805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.529 [2024-11-20 07:27:52.645872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.529 qpair failed and we were unable to recover it. 00:25:49.529 [2024-11-20 07:27:52.646082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.529 [2024-11-20 07:27:52.646151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.529 qpair failed and we were unable to recover it. 00:25:49.529 [2024-11-20 07:27:52.646443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.529 [2024-11-20 07:27:52.646511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.529 qpair failed and we were unable to recover it. 00:25:49.529 [2024-11-20 07:27:52.646801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.529 [2024-11-20 07:27:52.646866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.529 qpair failed and we were unable to recover it. 00:25:49.529 [2024-11-20 07:27:52.647143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.529 [2024-11-20 07:27:52.647222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.529 qpair failed and we were unable to recover it. 00:25:49.529 [2024-11-20 07:27:52.647499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.529 [2024-11-20 07:27:52.647567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.529 qpair failed and we were unable to recover it. 00:25:49.529 [2024-11-20 07:27:52.647867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.529 [2024-11-20 07:27:52.647932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.529 qpair failed and we were unable to recover it. 00:25:49.529 [2024-11-20 07:27:52.648168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.529 [2024-11-20 07:27:52.648234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.529 qpair failed and we were unable to recover it. 00:25:49.529 [2024-11-20 07:27:52.648464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.529 [2024-11-20 07:27:52.648534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.529 qpair failed and we were unable to recover it. 
00:25:49.529 [2024-11-20 07:27:52.648774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.529 [2024-11-20 07:27:52.648840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.529 qpair failed and we were unable to recover it. 00:25:49.529 [2024-11-20 07:27:52.649093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.529 [2024-11-20 07:27:52.649158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.529 qpair failed and we were unable to recover it. 00:25:49.529 [2024-11-20 07:27:52.649455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.529 [2024-11-20 07:27:52.649523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.529 qpair failed and we were unable to recover it. 00:25:49.529 [2024-11-20 07:27:52.649738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.529 [2024-11-20 07:27:52.649803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.529 qpair failed and we were unable to recover it. 00:25:49.529 [2024-11-20 07:27:52.650089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.529 [2024-11-20 07:27:52.650155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.529 qpair failed and we were unable to recover it. 00:25:49.529 [2024-11-20 07:27:52.650444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.529 [2024-11-20 07:27:52.650513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.529 qpair failed and we were unable to recover it. 00:25:49.529 [2024-11-20 07:27:52.650751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.529 [2024-11-20 07:27:52.650819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.529 qpair failed and we were unable to recover it. 00:25:49.529 [2024-11-20 07:27:52.651105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.529 [2024-11-20 07:27:52.651173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.529 qpair failed and we were unable to recover it. 00:25:49.529 [2024-11-20 07:27:52.651476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.530 [2024-11-20 07:27:52.651543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.530 qpair failed and we were unable to recover it. 00:25:49.530 [2024-11-20 07:27:52.651763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.530 [2024-11-20 07:27:52.651830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.530 qpair failed and we were unable to recover it. 
00:25:49.530 [2024-11-20 07:27:52.652118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.530 [2024-11-20 07:27:52.652184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.530 qpair failed and we were unable to recover it. 00:25:49.530 [2024-11-20 07:27:52.652467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.530 [2024-11-20 07:27:52.652534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.530 qpair failed and we were unable to recover it. 00:25:49.530 [2024-11-20 07:27:52.652798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.530 [2024-11-20 07:27:52.652866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.530 qpair failed and we were unable to recover it. 00:25:49.530 [2024-11-20 07:27:52.653061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.530 [2024-11-20 07:27:52.653128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.530 qpair failed and we were unable to recover it. 00:25:49.530 [2024-11-20 07:27:52.653385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.530 [2024-11-20 07:27:52.653455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.530 qpair failed and we were unable to recover it. 00:25:49.530 [2024-11-20 07:27:52.653747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.530 [2024-11-20 07:27:52.653815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.530 qpair failed and we were unable to recover it. 00:25:49.530 [2024-11-20 07:27:52.654019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.530 [2024-11-20 07:27:52.654087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.530 qpair failed and we were unable to recover it. 00:25:49.530 [2024-11-20 07:27:52.654324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.530 [2024-11-20 07:27:52.654401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.530 qpair failed and we were unable to recover it. 00:25:49.530 [2024-11-20 07:27:52.654703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.530 [2024-11-20 07:27:52.654772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.530 qpair failed and we were unable to recover it. 00:25:49.530 [2024-11-20 07:27:52.654991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.530 [2024-11-20 07:27:52.655060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.530 qpair failed and we were unable to recover it. 
00:25:49.530 [2024-11-20 07:27:52.655263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.530 [2024-11-20 07:27:52.655347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.530 qpair failed and we were unable to recover it. 00:25:49.530 [2024-11-20 07:27:52.655608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.530 [2024-11-20 07:27:52.655685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.530 qpair failed and we were unable to recover it. 00:25:49.530 [2024-11-20 07:27:52.655932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.530 [2024-11-20 07:27:52.655999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.530 qpair failed and we were unable to recover it. 00:25:49.530 [2024-11-20 07:27:52.656295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.530 [2024-11-20 07:27:52.656375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.530 qpair failed and we were unable to recover it. 00:25:49.530 [2024-11-20 07:27:52.656662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.530 [2024-11-20 07:27:52.656728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.530 qpair failed and we were unable to recover it. 00:25:49.530 [2024-11-20 07:27:52.657024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.530 [2024-11-20 07:27:52.657089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.530 qpair failed and we were unable to recover it. 00:25:49.530 [2024-11-20 07:27:52.657284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.530 [2024-11-20 07:27:52.657378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.530 qpair failed and we were unable to recover it. 00:25:49.530 [2024-11-20 07:27:52.657606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.530 [2024-11-20 07:27:52.657674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.530 qpair failed and we were unable to recover it. 00:25:49.530 [2024-11-20 07:27:52.657930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.530 [2024-11-20 07:27:52.657996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.530 qpair failed and we were unable to recover it. 00:25:49.530 [2024-11-20 07:27:52.658204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.530 [2024-11-20 07:27:52.658272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.530 qpair failed and we were unable to recover it. 
00:25:49.530 [2024-11-20 07:27:52.658541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.530 [2024-11-20 07:27:52.658618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.530 qpair failed and we were unable to recover it. 00:25:49.530 [2024-11-20 07:27:52.658866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.530 [2024-11-20 07:27:52.658935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.530 qpair failed and we were unable to recover it. 00:25:49.530 [2024-11-20 07:27:52.659220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.530 [2024-11-20 07:27:52.659287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.530 qpair failed and we were unable to recover it. 00:25:49.530 [2024-11-20 07:27:52.659547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.530 [2024-11-20 07:27:52.659620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.530 qpair failed and we were unable to recover it. 00:25:49.530 [2024-11-20 07:27:52.659905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.530 [2024-11-20 07:27:52.659971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.530 qpair failed and we were unable to recover it. 00:25:49.530 [2024-11-20 07:27:52.660249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.530 [2024-11-20 07:27:52.660341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.530 qpair failed and we were unable to recover it. 00:25:49.530 [2024-11-20 07:27:52.660585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.530 [2024-11-20 07:27:52.660658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.530 qpair failed and we were unable to recover it. 00:25:49.530 [2024-11-20 07:27:52.660930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.530 [2024-11-20 07:27:52.660996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.530 qpair failed and we were unable to recover it. 00:25:49.530 [2024-11-20 07:27:52.661267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.530 [2024-11-20 07:27:52.661346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.530 qpair failed and we were unable to recover it. 00:25:49.530 [2024-11-20 07:27:52.661605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.530 [2024-11-20 07:27:52.661670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.530 qpair failed and we were unable to recover it. 
00:25:49.531 [2024-11-20 07:27:52.661920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.531 [2024-11-20 07:27:52.661987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.531 qpair failed and we were unable to recover it. 00:25:49.531 [2024-11-20 07:27:52.662247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.531 [2024-11-20 07:27:52.662327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.531 qpair failed and we were unable to recover it. 00:25:49.531 [2024-11-20 07:27:52.662592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.531 [2024-11-20 07:27:52.662660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.531 qpair failed and we were unable to recover it. 00:25:49.531 [2024-11-20 07:27:52.662907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.531 [2024-11-20 07:27:52.662973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.531 qpair failed and we were unable to recover it. 00:25:49.531 [2024-11-20 07:27:52.663234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.531 [2024-11-20 07:27:52.663316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.531 qpair failed and we were unable to recover it. 00:25:49.531 [2024-11-20 07:27:52.663524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.531 [2024-11-20 07:27:52.663592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.531 qpair failed and we were unable to recover it. 00:25:49.531 [2024-11-20 07:27:52.663812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.531 [2024-11-20 07:27:52.663879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.531 qpair failed and we were unable to recover it. 00:25:49.531 [2024-11-20 07:27:52.664144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.531 [2024-11-20 07:27:52.664211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.531 qpair failed and we were unable to recover it. 00:25:49.531 [2024-11-20 07:27:52.664459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.531 [2024-11-20 07:27:52.664528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.531 qpair failed and we were unable to recover it. 00:25:49.531 [2024-11-20 07:27:52.664811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.531 [2024-11-20 07:27:52.664878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.531 qpair failed and we were unable to recover it. 
00:25:49.531 [2024-11-20 07:27:52.665127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.531 [2024-11-20 07:27:52.665195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.531 qpair failed and we were unable to recover it. 00:25:49.531 [2024-11-20 07:27:52.665471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.531 [2024-11-20 07:27:52.665538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.531 qpair failed and we were unable to recover it. 00:25:49.531 [2024-11-20 07:27:52.665825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.531 [2024-11-20 07:27:52.665891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.531 qpair failed and we were unable to recover it. 00:25:49.531 [2024-11-20 07:27:52.666120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.531 [2024-11-20 07:27:52.666189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.531 qpair failed and we were unable to recover it. 00:25:49.531 [2024-11-20 07:27:52.666493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.531 [2024-11-20 07:27:52.666560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.531 qpair failed and we were unable to recover it. 00:25:49.531 [2024-11-20 07:27:52.666844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.531 [2024-11-20 07:27:52.666911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.531 qpair failed and we were unable to recover it. 00:25:49.531 [2024-11-20 07:27:52.667152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.531 [2024-11-20 07:27:52.667219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.531 qpair failed and we were unable to recover it. 00:25:49.531 [2024-11-20 07:27:52.667455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.531 [2024-11-20 07:27:52.667523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.531 qpair failed and we were unable to recover it. 00:25:49.531 [2024-11-20 07:27:52.667813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.531 [2024-11-20 07:27:52.667881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.531 qpair failed and we were unable to recover it. 00:25:49.531 [2024-11-20 07:27:52.668145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.531 [2024-11-20 07:27:52.668213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.531 qpair failed and we were unable to recover it. 
00:25:49.531 [2024-11-20 07:27:52.668478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.531 [2024-11-20 07:27:52.668550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.531 qpair failed and we were unable to recover it. 00:25:49.531 [2024-11-20 07:27:52.668839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.531 [2024-11-20 07:27:52.668916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.531 qpair failed and we were unable to recover it. 00:25:49.531 [2024-11-20 07:27:52.669207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.531 [2024-11-20 07:27:52.669274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.531 qpair failed and we were unable to recover it. 00:25:49.531 [2024-11-20 07:27:52.669505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.531 [2024-11-20 07:27:52.669571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.531 qpair failed and we were unable to recover it. 00:25:49.531 [2024-11-20 07:27:52.669858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.531 [2024-11-20 07:27:52.669924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.531 qpair failed and we were unable to recover it. 00:25:49.531 [2024-11-20 07:27:52.670144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.531 [2024-11-20 07:27:52.670209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.531 qpair failed and we were unable to recover it. 00:25:49.531 [2024-11-20 07:27:52.670509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.531 [2024-11-20 07:27:52.670577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.531 qpair failed and we were unable to recover it. 00:25:49.531 [2024-11-20 07:27:52.670825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.531 [2024-11-20 07:27:52.670892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.531 qpair failed and we were unable to recover it. 00:25:49.531 [2024-11-20 07:27:52.671132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.531 [2024-11-20 07:27:52.671199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.531 qpair failed and we were unable to recover it. 00:25:49.531 [2024-11-20 07:27:52.671477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.531 [2024-11-20 07:27:52.671546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.531 qpair failed and we were unable to recover it. 
00:25:49.531 [2024-11-20 07:27:52.671834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.531 [2024-11-20 07:27:52.671901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.531 qpair failed and we were unable to recover it. 00:25:49.531 [2024-11-20 07:27:52.672154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.531 [2024-11-20 07:27:52.672220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.531 qpair failed and we were unable to recover it. 00:25:49.531 [2024-11-20 07:27:52.672632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.531 [2024-11-20 07:27:52.672702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.531 qpair failed and we were unable to recover it. 00:25:49.531 [2024-11-20 07:27:52.672953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.532 [2024-11-20 07:27:52.673020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.532 qpair failed and we were unable to recover it. 00:25:49.532 [2024-11-20 07:27:52.673243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.532 [2024-11-20 07:27:52.673340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.532 qpair failed and we were unable to recover it. 00:25:49.532 [2024-11-20 07:27:52.673567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.532 [2024-11-20 07:27:52.673634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.532 qpair failed and we were unable to recover it. 00:25:49.532 [2024-11-20 07:27:52.673880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.532 [2024-11-20 07:27:52.673949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.532 qpair failed and we were unable to recover it. 00:25:49.532 [2024-11-20 07:27:52.674247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.532 [2024-11-20 07:27:52.674331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.532 qpair failed and we were unable to recover it. 00:25:49.532 [2024-11-20 07:27:52.674618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.532 [2024-11-20 07:27:52.674685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.532 qpair failed and we were unable to recover it. 00:25:49.532 [2024-11-20 07:27:52.674927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.532 [2024-11-20 07:27:52.674993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.532 qpair failed and we were unable to recover it. 
00:25:49.532 [2024-11-20 07:27:52.675241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.532 [2024-11-20 07:27:52.675322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.532 qpair failed and we were unable to recover it. 00:25:49.532 [2024-11-20 07:27:52.675607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.532 [2024-11-20 07:27:52.675673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.532 qpair failed and we were unable to recover it. 00:25:49.532 [2024-11-20 07:27:52.675892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.532 [2024-11-20 07:27:52.675957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.532 qpair failed and we were unable to recover it. 00:25:49.532 [2024-11-20 07:27:52.676167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.532 [2024-11-20 07:27:52.676234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.532 qpair failed and we were unable to recover it. 00:25:49.532 [2024-11-20 07:27:52.676444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.532 [2024-11-20 07:27:52.676512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.532 qpair failed and we were unable to recover it. 00:25:49.532 [2024-11-20 07:27:52.676755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.532 [2024-11-20 07:27:52.676820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.532 qpair failed and we were unable to recover it. 00:25:49.532 [2024-11-20 07:27:52.677053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.532 [2024-11-20 07:27:52.677120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.532 qpair failed and we were unable to recover it. 00:25:49.532 [2024-11-20 07:27:52.677370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.532 [2024-11-20 07:27:52.677438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.532 qpair failed and we were unable to recover it. 00:25:49.532 [2024-11-20 07:27:52.677709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.532 [2024-11-20 07:27:52.677775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.532 qpair failed and we were unable to recover it. 00:25:49.532 [2024-11-20 07:27:52.678074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.532 [2024-11-20 07:27:52.678139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.532 qpair failed and we were unable to recover it. 
00:25:49.532 [2024-11-20 07:27:52.678382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.532 [2024-11-20 07:27:52.678477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.532 qpair failed and we were unable to recover it. 00:25:49.532 [2024-11-20 07:27:52.678729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.532 [2024-11-20 07:27:52.678795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.532 qpair failed and we were unable to recover it. 00:25:49.532 [2024-11-20 07:27:52.679040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.532 [2024-11-20 07:27:52.679113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.532 qpair failed and we were unable to recover it. 00:25:49.532 [2024-11-20 07:27:52.679355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.532 [2024-11-20 07:27:52.679424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.532 qpair failed and we were unable to recover it. 00:25:49.532 [2024-11-20 07:27:52.679629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.532 [2024-11-20 07:27:52.679695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.532 qpair failed and we were unable to recover it. 00:25:49.532 [2024-11-20 07:27:52.679940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.532 [2024-11-20 07:27:52.680009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.532 qpair failed and we were unable to recover it. 00:25:49.532 [2024-11-20 07:27:52.680196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.532 [2024-11-20 07:27:52.680263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.532 qpair failed and we were unable to recover it. 00:25:49.532 [2024-11-20 07:27:52.680519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.532 [2024-11-20 07:27:52.680586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.532 qpair failed and we were unable to recover it. 00:25:49.532 [2024-11-20 07:27:52.680831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.532 [2024-11-20 07:27:52.680898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.532 qpair failed and we were unable to recover it. 00:25:49.532 [2024-11-20 07:27:52.681141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.532 [2024-11-20 07:27:52.681210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.532 qpair failed and we were unable to recover it. 
00:25:49.532 [2024-11-20 07:27:52.681467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.532 [2024-11-20 07:27:52.681536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.532 qpair failed and we were unable to recover it. 00:25:49.532 [2024-11-20 07:27:52.681788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.532 [2024-11-20 07:27:52.681866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.532 qpair failed and we were unable to recover it. 00:25:49.533 [2024-11-20 07:27:52.682128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.533 [2024-11-20 07:27:52.682194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.533 qpair failed and we were unable to recover it. 00:25:49.533 [2024-11-20 07:27:52.682447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.533 [2024-11-20 07:27:52.682516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.533 qpair failed and we were unable to recover it. 00:25:49.533 [2024-11-20 07:27:52.682769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.533 [2024-11-20 07:27:52.682836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.533 qpair failed and we were unable to recover it. 00:25:49.533 [2024-11-20 07:27:52.683084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.533 [2024-11-20 07:27:52.683149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.533 qpair failed and we were unable to recover it. 00:25:49.533 [2024-11-20 07:27:52.683406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.533 [2024-11-20 07:27:52.683476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.533 qpair failed and we were unable to recover it. 00:25:49.533 [2024-11-20 07:27:52.683763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.533 [2024-11-20 07:27:52.683832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.533 qpair failed and we were unable to recover it. 00:25:49.533 [2024-11-20 07:27:52.684032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.533 [2024-11-20 07:27:52.684095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.533 qpair failed and we were unable to recover it. 00:25:49.533 [2024-11-20 07:27:52.684328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.533 [2024-11-20 07:27:52.684394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.533 qpair failed and we were unable to recover it. 
00:25:49.533 [2024-11-20 07:27:52.684640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.533 [2024-11-20 07:27:52.684703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.533 qpair failed and we were unable to recover it. 00:25:49.533 [2024-11-20 07:27:52.684926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.533 [2024-11-20 07:27:52.684989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.533 qpair failed and we were unable to recover it. 00:25:49.533 [2024-11-20 07:27:52.685269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.533 [2024-11-20 07:27:52.685364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.533 qpair failed and we were unable to recover it. 00:25:49.533 [2024-11-20 07:27:52.685593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.533 [2024-11-20 07:27:52.685656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.533 qpair failed and we were unable to recover it. 00:25:49.533 [2024-11-20 07:27:52.685938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.533 [2024-11-20 07:27:52.686002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.533 qpair failed and we were unable to recover it. 00:25:49.533 [2024-11-20 07:27:52.686319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.533 [2024-11-20 07:27:52.686384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.533 qpair failed and we were unable to recover it. 00:25:49.533 [2024-11-20 07:27:52.686673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.533 [2024-11-20 07:27:52.686737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.533 qpair failed and we were unable to recover it. 00:25:49.533 [2024-11-20 07:27:52.686998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.533 [2024-11-20 07:27:52.687062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.533 qpair failed and we were unable to recover it. 00:25:49.533 [2024-11-20 07:27:52.687323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.533 [2024-11-20 07:27:52.687387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.533 qpair failed and we were unable to recover it. 00:25:49.533 [2024-11-20 07:27:52.687634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.533 [2024-11-20 07:27:52.687697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.533 qpair failed and we were unable to recover it. 
00:25:49.533 [2024-11-20 07:27:52.687948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.533 [2024-11-20 07:27:52.688016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.533 qpair failed and we were unable to recover it. 00:25:49.533 [2024-11-20 07:27:52.688327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.533 [2024-11-20 07:27:52.688392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.533 qpair failed and we were unable to recover it. 00:25:49.533 [2024-11-20 07:27:52.688636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.533 [2024-11-20 07:27:52.688701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.533 qpair failed and we were unable to recover it. 00:25:49.533 [2024-11-20 07:27:52.688949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.533 [2024-11-20 07:27:52.689013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.533 qpair failed and we were unable to recover it. 00:25:49.533 [2024-11-20 07:27:52.689264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.533 [2024-11-20 07:27:52.689343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.533 qpair failed and we were unable to recover it. 00:25:49.533 [2024-11-20 07:27:52.689601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.533 [2024-11-20 07:27:52.689666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.533 qpair failed and we were unable to recover it. 00:25:49.533 [2024-11-20 07:27:52.689948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.533 [2024-11-20 07:27:52.690011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.533 qpair failed and we were unable to recover it. 00:25:49.533 [2024-11-20 07:27:52.690265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.533 [2024-11-20 07:27:52.690361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.533 qpair failed and we were unable to recover it. 00:25:49.533 [2024-11-20 07:27:52.690630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.533 [2024-11-20 07:27:52.690692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.533 qpair failed and we were unable to recover it. 00:25:49.533 [2024-11-20 07:27:52.690990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.533 [2024-11-20 07:27:52.691057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.533 qpair failed and we were unable to recover it. 
00:25:49.534 [2024-11-20 07:27:52.691277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.534 [2024-11-20 07:27:52.691362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.534 qpair failed and we were unable to recover it. 00:25:49.534 [2024-11-20 07:27:52.691645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.534 [2024-11-20 07:27:52.691711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.534 qpair failed and we were unable to recover it. 00:25:49.534 [2024-11-20 07:27:52.692012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.534 [2024-11-20 07:27:52.692083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.534 qpair failed and we were unable to recover it. 00:25:49.534 [2024-11-20 07:27:52.692334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.534 [2024-11-20 07:27:52.692402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.534 qpair failed and we were unable to recover it. 00:25:49.534 [2024-11-20 07:27:52.692612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.534 [2024-11-20 07:27:52.692678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.534 qpair failed and we were unable to recover it. 00:25:49.534 [2024-11-20 07:27:52.692916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.534 [2024-11-20 07:27:52.692983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.534 qpair failed and we were unable to recover it. 00:25:49.534 [2024-11-20 07:27:52.693191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.534 [2024-11-20 07:27:52.693258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.534 qpair failed and we were unable to recover it. 00:25:49.534 [2024-11-20 07:27:52.693559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.534 [2024-11-20 07:27:52.693625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.534 qpair failed and we were unable to recover it. 00:25:49.534 [2024-11-20 07:27:52.693856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.534 [2024-11-20 07:27:52.693921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.534 qpair failed and we were unable to recover it. 00:25:49.534 [2024-11-20 07:27:52.694159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.534 [2024-11-20 07:27:52.694225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.534 qpair failed and we were unable to recover it. 
00:25:49.534 [2024-11-20 07:27:52.694483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.534 [2024-11-20 07:27:52.694549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.534 qpair failed and we were unable to recover it. 00:25:49.534 [2024-11-20 07:27:52.694842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.534 [2024-11-20 07:27:52.694917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.534 qpair failed and we were unable to recover it. 00:25:49.534 [2024-11-20 07:27:52.695160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.534 [2024-11-20 07:27:52.695226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.534 qpair failed and we were unable to recover it. 00:25:49.534 [2024-11-20 07:27:52.695505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.534 [2024-11-20 07:27:52.695571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.534 qpair failed and we were unable to recover it. 00:25:49.534 [2024-11-20 07:27:52.695854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.534 [2024-11-20 07:27:52.695920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.534 qpair failed and we were unable to recover it. 00:25:49.534 [2024-11-20 07:27:52.696167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.534 [2024-11-20 07:27:52.696233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.534 qpair failed and we were unable to recover it. 00:25:49.534 [2024-11-20 07:27:52.696555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.534 [2024-11-20 07:27:52.696622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.534 qpair failed and we were unable to recover it. 00:25:49.534 [2024-11-20 07:27:52.696911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.534 [2024-11-20 07:27:52.696976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.534 qpair failed and we were unable to recover it. 00:25:49.534 [2024-11-20 07:27:52.697246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.534 [2024-11-20 07:27:52.697337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.534 qpair failed and we were unable to recover it. 00:25:49.534 [2024-11-20 07:27:52.697539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.534 [2024-11-20 07:27:52.697606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.534 qpair failed and we were unable to recover it. 
00:25:49.534 [2024-11-20 07:27:52.697892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.534 [2024-11-20 07:27:52.697958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.534 qpair failed and we were unable to recover it. 00:25:49.534 [2024-11-20 07:27:52.698245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.534 [2024-11-20 07:27:52.698328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.534 qpair failed and we were unable to recover it. 00:25:49.534 [2024-11-20 07:27:52.698557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.534 [2024-11-20 07:27:52.698626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.534 qpair failed and we were unable to recover it. 00:25:49.534 [2024-11-20 07:27:52.698910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.534 [2024-11-20 07:27:52.698976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.534 qpair failed and we were unable to recover it. 00:25:49.534 [2024-11-20 07:27:52.699222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.534 [2024-11-20 07:27:52.699288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.534 qpair failed and we were unable to recover it. 00:25:49.534 [2024-11-20 07:27:52.699534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.534 [2024-11-20 07:27:52.699603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.534 qpair failed and we were unable to recover it. 00:25:49.534 [2024-11-20 07:27:52.699849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.534 [2024-11-20 07:27:52.699917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.534 qpair failed and we were unable to recover it. 00:25:49.534 [2024-11-20 07:27:52.700131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.534 [2024-11-20 07:27:52.700197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.534 qpair failed and we were unable to recover it. 00:25:49.534 [2024-11-20 07:27:52.700416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.534 [2024-11-20 07:27:52.700484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.534 qpair failed and we were unable to recover it. 00:25:49.534 [2024-11-20 07:27:52.700775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.534 [2024-11-20 07:27:52.700843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.534 qpair failed and we were unable to recover it. 
00:25:49.534 [2024-11-20 07:27:52.701071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.534 [2024-11-20 07:27:52.701138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.534 qpair failed and we were unable to recover it. 00:25:49.534 [2024-11-20 07:27:52.701379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.534 [2024-11-20 07:27:52.701448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.534 qpair failed and we were unable to recover it. 00:25:49.534 [2024-11-20 07:27:52.701693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.534 [2024-11-20 07:27:52.701759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.534 qpair failed and we were unable to recover it. 00:25:49.534 [2024-11-20 07:27:52.702006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.534 [2024-11-20 07:27:52.702075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.534 qpair failed and we were unable to recover it. 00:25:49.534 [2024-11-20 07:27:52.702345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.535 [2024-11-20 07:27:52.702412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.535 qpair failed and we were unable to recover it. 00:25:49.535 [2024-11-20 07:27:52.702645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.535 [2024-11-20 07:27:52.702713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.535 qpair failed and we were unable to recover it. 00:25:49.535 [2024-11-20 07:27:52.702951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.535 [2024-11-20 07:27:52.703018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.535 qpair failed and we were unable to recover it. 00:25:49.535 [2024-11-20 07:27:52.703261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.535 [2024-11-20 07:27:52.703343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.535 qpair failed and we were unable to recover it. 00:25:49.535 [2024-11-20 07:27:52.703556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.535 [2024-11-20 07:27:52.703622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.535 qpair failed and we were unable to recover it. 00:25:49.535 [2024-11-20 07:27:52.703810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.535 [2024-11-20 07:27:52.703875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.535 qpair failed and we were unable to recover it. 
00:25:49.535 [2024-11-20 07:27:52.704119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.535 [2024-11-20 07:27:52.704186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.535 qpair failed and we were unable to recover it. 00:25:49.535 [2024-11-20 07:27:52.704445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.535 [2024-11-20 07:27:52.704514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.535 qpair failed and we were unable to recover it. 00:25:49.535 [2024-11-20 07:27:52.704813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.535 [2024-11-20 07:27:52.704879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.535 qpair failed and we were unable to recover it. 00:25:49.535 [2024-11-20 07:27:52.705169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.535 [2024-11-20 07:27:52.705235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.535 qpair failed and we were unable to recover it. 00:25:49.535 [2024-11-20 07:27:52.705461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.535 [2024-11-20 07:27:52.705528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.535 qpair failed and we were unable to recover it. 00:25:49.535 [2024-11-20 07:27:52.705810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.535 [2024-11-20 07:27:52.705876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.535 qpair failed and we were unable to recover it. 00:25:49.535 [2024-11-20 07:27:52.706074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.535 [2024-11-20 07:27:52.706142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.535 qpair failed and we were unable to recover it. 00:25:49.535 [2024-11-20 07:27:52.706380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.535 [2024-11-20 07:27:52.706449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.535 qpair failed and we were unable to recover it. 00:25:49.535 [2024-11-20 07:27:52.706654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.535 [2024-11-20 07:27:52.706722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.535 qpair failed and we were unable to recover it. 00:25:49.535 [2024-11-20 07:27:52.706970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.535 [2024-11-20 07:27:52.707037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.535 qpair failed and we were unable to recover it. 
00:25:49.535 [2024-11-20 07:27:52.707299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.535 [2024-11-20 07:27:52.707380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.535 qpair failed and we were unable to recover it. 00:25:49.535 [2024-11-20 07:27:52.707635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.535 [2024-11-20 07:27:52.707722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.535 qpair failed and we were unable to recover it. 00:25:49.535 [2024-11-20 07:27:52.707966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.535 [2024-11-20 07:27:52.708032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.535 qpair failed and we were unable to recover it. 00:25:49.535 [2024-11-20 07:27:52.708315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.535 [2024-11-20 07:27:52.708405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.535 qpair failed and we were unable to recover it. 00:25:49.535 [2024-11-20 07:27:52.708710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.535 [2024-11-20 07:27:52.708776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.535 qpair failed and we were unable to recover it. 00:25:49.535 [2024-11-20 07:27:52.709013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.535 [2024-11-20 07:27:52.709080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.535 qpair failed and we were unable to recover it. 00:25:49.535 [2024-11-20 07:27:52.709354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.535 [2024-11-20 07:27:52.709422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.535 qpair failed and we were unable to recover it. 00:25:49.535 [2024-11-20 07:27:52.709641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.535 [2024-11-20 07:27:52.709709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.535 qpair failed and we were unable to recover it. 00:25:49.535 [2024-11-20 07:27:52.709936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.535 [2024-11-20 07:27:52.710002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.535 qpair failed and we were unable to recover it. 00:25:49.535 [2024-11-20 07:27:52.710259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.535 [2024-11-20 07:27:52.710338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.535 qpair failed and we were unable to recover it. 
00:25:49.535 [2024-11-20 07:27:52.710646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.535 [2024-11-20 07:27:52.710714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.535 qpair failed and we were unable to recover it. 00:25:49.535 [2024-11-20 07:27:52.711012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.535 [2024-11-20 07:27:52.711078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.535 qpair failed and we were unable to recover it. 00:25:49.535 [2024-11-20 07:27:52.711278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.535 [2024-11-20 07:27:52.711359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.535 qpair failed and we were unable to recover it. 00:25:49.535 [2024-11-20 07:27:52.711623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.535 [2024-11-20 07:27:52.711689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.535 qpair failed and we were unable to recover it. 00:25:49.535 [2024-11-20 07:27:52.711985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.535 [2024-11-20 07:27:52.712050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.535 qpair failed and we were unable to recover it. 00:25:49.535 [2024-11-20 07:27:52.712352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.535 [2024-11-20 07:27:52.712422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.535 qpair failed and we were unable to recover it. 00:25:49.535 [2024-11-20 07:27:52.712650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.536 [2024-11-20 07:27:52.712716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.536 qpair failed and we were unable to recover it. 00:25:49.536 [2024-11-20 07:27:52.712924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.536 [2024-11-20 07:27:52.712992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.536 qpair failed and we were unable to recover it. 00:25:49.536 [2024-11-20 07:27:52.713293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.536 [2024-11-20 07:27:52.713373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.536 qpair failed and we were unable to recover it. 00:25:49.536 [2024-11-20 07:27:52.713658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.536 [2024-11-20 07:27:52.713725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.536 qpair failed and we were unable to recover it. 
00:25:49.536 [2024-11-20 07:27:52.714018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.536 [2024-11-20 07:27:52.714085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.536 qpair failed and we were unable to recover it. 00:25:49.536 [2024-11-20 07:27:52.714342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.536 [2024-11-20 07:27:52.714409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.536 qpair failed and we were unable to recover it. 00:25:49.536 [2024-11-20 07:27:52.714629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.536 [2024-11-20 07:27:52.714695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.536 qpair failed and we were unable to recover it. 00:25:49.536 [2024-11-20 07:27:52.714991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.536 [2024-11-20 07:27:52.715057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.536 qpair failed and we were unable to recover it. 00:25:49.536 [2024-11-20 07:27:52.715348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.536 [2024-11-20 07:27:52.715415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.536 qpair failed and we were unable to recover it. 00:25:49.536 [2024-11-20 07:27:52.715702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.536 [2024-11-20 07:27:52.715768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.536 qpair failed and we were unable to recover it. 00:25:49.536 [2024-11-20 07:27:52.716067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.536 [2024-11-20 07:27:52.716133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.536 qpair failed and we were unable to recover it. 00:25:49.536 [2024-11-20 07:27:52.716428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.536 [2024-11-20 07:27:52.716494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.536 qpair failed and we were unable to recover it. 00:25:49.536 [2024-11-20 07:27:52.716753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.536 [2024-11-20 07:27:52.716822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.536 qpair failed and we were unable to recover it. 00:25:49.536 [2024-11-20 07:27:52.717085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.536 [2024-11-20 07:27:52.717151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.536 qpair failed and we were unable to recover it. 
00:25:49.536 [2024-11-20 07:27:52.717364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.536 [2024-11-20 07:27:52.717431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.536 qpair failed and we were unable to recover it. 00:25:49.536 [2024-11-20 07:27:52.717685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.536 [2024-11-20 07:27:52.717750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.536 qpair failed and we were unable to recover it. 00:25:49.536 [2024-11-20 07:27:52.718043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.536 [2024-11-20 07:27:52.718109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.536 qpair failed and we were unable to recover it. 00:25:49.536 [2024-11-20 07:27:52.718368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.536 [2024-11-20 07:27:52.718435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.536 qpair failed and we were unable to recover it. 00:25:49.536 [2024-11-20 07:27:52.718725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.536 [2024-11-20 07:27:52.718790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.536 qpair failed and we were unable to recover it. 00:25:49.536 [2024-11-20 07:27:52.719044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.536 [2024-11-20 07:27:52.719111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.536 qpair failed and we were unable to recover it. 00:25:49.536 [2024-11-20 07:27:52.719367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.536 [2024-11-20 07:27:52.719435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.536 qpair failed and we were unable to recover it. 00:25:49.536 [2024-11-20 07:27:52.719720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.536 [2024-11-20 07:27:52.719786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.536 qpair failed and we were unable to recover it. 00:25:49.536 [2024-11-20 07:27:52.720078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.536 [2024-11-20 07:27:52.720144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.536 qpair failed and we were unable to recover it. 00:25:49.536 [2024-11-20 07:27:52.720407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.536 [2024-11-20 07:27:52.720475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.536 qpair failed and we were unable to recover it. 
00:25:49.536 [2024-11-20 07:27:52.720725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.536 [2024-11-20 07:27:52.720791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.536 qpair failed and we were unable to recover it. 00:25:49.536 [2024-11-20 07:27:52.721044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.536 [2024-11-20 07:27:52.721120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.536 qpair failed and we were unable to recover it. 00:25:49.536 [2024-11-20 07:27:52.721421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.536 [2024-11-20 07:27:52.721489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.536 qpair failed and we were unable to recover it. 00:25:49.536 [2024-11-20 07:27:52.721727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.536 [2024-11-20 07:27:52.721792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.536 qpair failed and we were unable to recover it. 00:25:49.536 [2024-11-20 07:27:52.722031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.536 [2024-11-20 07:27:52.722096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.536 qpair failed and we were unable to recover it. 00:25:49.536 [2024-11-20 07:27:52.722390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.537 [2024-11-20 07:27:52.722458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.537 qpair failed and we were unable to recover it. 00:25:49.537 [2024-11-20 07:27:52.722648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.537 [2024-11-20 07:27:52.722714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.537 qpair failed and we were unable to recover it. 00:25:49.537 [2024-11-20 07:27:52.722963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.537 [2024-11-20 07:27:52.723027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.537 qpair failed and we were unable to recover it. 00:25:49.537 [2024-11-20 07:27:52.723320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.537 [2024-11-20 07:27:52.723387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.537 qpair failed and we were unable to recover it. 00:25:49.537 [2024-11-20 07:27:52.723629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.537 [2024-11-20 07:27:52.723697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.537 qpair failed and we were unable to recover it. 
00:25:49.537 [2024-11-20 07:27:52.723986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.537 [2024-11-20 07:27:52.724051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.537 qpair failed and we were unable to recover it. 00:25:49.537 [2024-11-20 07:27:52.724342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.537 [2024-11-20 07:27:52.724409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.537 qpair failed and we were unable to recover it. 00:25:49.537 [2024-11-20 07:27:52.724622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.537 [2024-11-20 07:27:52.724693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.537 qpair failed and we were unable to recover it. 00:25:49.537 [2024-11-20 07:27:52.724980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.537 [2024-11-20 07:27:52.725046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.537 qpair failed and we were unable to recover it. 00:25:49.537 [2024-11-20 07:27:52.725259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.537 [2024-11-20 07:27:52.725353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.537 qpair failed and we were unable to recover it. 00:25:49.537 [2024-11-20 07:27:52.725642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.537 [2024-11-20 07:27:52.725709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.537 qpair failed and we were unable to recover it. 00:25:49.537 [2024-11-20 07:27:52.725997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.537 [2024-11-20 07:27:52.726064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.537 qpair failed and we were unable to recover it. 00:25:49.537 [2024-11-20 07:27:52.726348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.537 [2024-11-20 07:27:52.726415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.537 qpair failed and we were unable to recover it. 00:25:49.537 [2024-11-20 07:27:52.726715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.537 [2024-11-20 07:27:52.726781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.537 qpair failed and we were unable to recover it. 00:25:49.537 [2024-11-20 07:27:52.727032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.537 [2024-11-20 07:27:52.727099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.537 qpair failed and we were unable to recover it. 
00:25:49.537 [2024-11-20 07:27:52.727336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.537 [2024-11-20 07:27:52.727406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.537 qpair failed and we were unable to recover it. 00:25:49.537 [2024-11-20 07:27:52.727661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.537 [2024-11-20 07:27:52.727728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.537 qpair failed and we were unable to recover it. 00:25:49.537 [2024-11-20 07:27:52.728028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.537 [2024-11-20 07:27:52.728093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.537 qpair failed and we were unable to recover it. 00:25:49.537 [2024-11-20 07:27:52.728375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.537 [2024-11-20 07:27:52.728442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.537 qpair failed and we were unable to recover it. 00:25:49.537 [2024-11-20 07:27:52.728692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.537 [2024-11-20 07:27:52.728759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.537 qpair failed and we were unable to recover it. 00:25:49.537 [2024-11-20 07:27:52.729048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.537 [2024-11-20 07:27:52.729113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.537 qpair failed and we were unable to recover it. 00:25:49.537 [2024-11-20 07:27:52.729396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.537 [2024-11-20 07:27:52.729463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.537 qpair failed and we were unable to recover it. 00:25:49.537 [2024-11-20 07:27:52.729770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.537 [2024-11-20 07:27:52.729836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.537 qpair failed and we were unable to recover it. 00:25:49.537 [2024-11-20 07:27:52.730127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.537 [2024-11-20 07:27:52.730193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.537 qpair failed and we were unable to recover it. 00:25:49.537 [2024-11-20 07:27:52.730407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.537 [2024-11-20 07:27:52.730474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.537 qpair failed and we were unable to recover it. 
00:25:49.544 [2024-11-20 07:27:52.797417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.544 [2024-11-20 07:27:52.797495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.544 qpair failed and we were unable to recover it. 00:25:49.544 [2024-11-20 07:27:52.797794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.544 [2024-11-20 07:27:52.797860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.544 qpair failed and we were unable to recover it. 00:25:49.544 [2024-11-20 07:27:52.798075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.544 [2024-11-20 07:27:52.798141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.544 qpair failed and we were unable to recover it. 00:25:49.544 [2024-11-20 07:27:52.798438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.544 [2024-11-20 07:27:52.798505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.544 qpair failed and we were unable to recover it. 00:25:49.544 [2024-11-20 07:27:52.798788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.544 [2024-11-20 07:27:52.798852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.544 qpair failed and we were unable to recover it. 00:25:49.544 [2024-11-20 07:27:52.799111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.544 [2024-11-20 07:27:52.799176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.544 qpair failed and we were unable to recover it. 00:25:49.544 [2024-11-20 07:27:52.799478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.544 [2024-11-20 07:27:52.799544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.544 qpair failed and we were unable to recover it. 00:25:49.544 [2024-11-20 07:27:52.799831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.544 [2024-11-20 07:27:52.799896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.544 qpair failed and we were unable to recover it. 00:25:49.544 [2024-11-20 07:27:52.800181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.544 [2024-11-20 07:27:52.800246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.544 qpair failed and we were unable to recover it. 00:25:49.544 [2024-11-20 07:27:52.800557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.544 [2024-11-20 07:27:52.800625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.544 qpair failed and we were unable to recover it. 
00:25:49.544 [2024-11-20 07:27:52.800887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.544 [2024-11-20 07:27:52.800952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.544 qpair failed and we were unable to recover it. 00:25:49.544 [2024-11-20 07:27:52.801254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.544 [2024-11-20 07:27:52.801332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.544 qpair failed and we were unable to recover it. 00:25:49.544 [2024-11-20 07:27:52.801611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.544 [2024-11-20 07:27:52.801677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.544 qpair failed and we were unable to recover it. 00:25:49.544 [2024-11-20 07:27:52.801875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.544 [2024-11-20 07:27:52.801940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.544 qpair failed and we were unable to recover it. 00:25:49.544 [2024-11-20 07:27:52.802196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.544 [2024-11-20 07:27:52.802263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.544 qpair failed and we were unable to recover it. 00:25:49.544 [2024-11-20 07:27:52.802579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.544 [2024-11-20 07:27:52.802646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.544 qpair failed and we were unable to recover it. 00:25:49.544 [2024-11-20 07:27:52.802925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.544 [2024-11-20 07:27:52.802991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.544 qpair failed and we were unable to recover it. 00:25:49.544 [2024-11-20 07:27:52.803273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.544 [2024-11-20 07:27:52.803352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.544 qpair failed and we were unable to recover it. 00:25:49.544 [2024-11-20 07:27:52.803636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.544 [2024-11-20 07:27:52.803703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.544 qpair failed and we were unable to recover it. 00:25:49.544 [2024-11-20 07:27:52.803999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.544 [2024-11-20 07:27:52.804063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.544 qpair failed and we were unable to recover it. 
00:25:49.544 [2024-11-20 07:27:52.804325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.544 [2024-11-20 07:27:52.804392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.544 qpair failed and we were unable to recover it. 00:25:49.544 [2024-11-20 07:27:52.804694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.544 [2024-11-20 07:27:52.804760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.544 qpair failed and we were unable to recover it. 00:25:49.544 [2024-11-20 07:27:52.805045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.544 [2024-11-20 07:27:52.805111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.544 qpair failed and we were unable to recover it. 00:25:49.544 [2024-11-20 07:27:52.805379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.544 [2024-11-20 07:27:52.805447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.544 qpair failed and we were unable to recover it. 00:25:49.544 [2024-11-20 07:27:52.805727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.544 [2024-11-20 07:27:52.805792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.544 qpair failed and we were unable to recover it. 00:25:49.544 [2024-11-20 07:27:52.806053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.545 [2024-11-20 07:27:52.806120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.545 qpair failed and we were unable to recover it. 00:25:49.545 [2024-11-20 07:27:52.806414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.545 [2024-11-20 07:27:52.806482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.545 qpair failed and we were unable to recover it. 00:25:49.545 [2024-11-20 07:27:52.806725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.545 [2024-11-20 07:27:52.806791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.545 qpair failed and we were unable to recover it. 00:25:49.545 [2024-11-20 07:27:52.807077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.545 [2024-11-20 07:27:52.807144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.545 qpair failed and we were unable to recover it. 00:25:49.545 [2024-11-20 07:27:52.807432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.545 [2024-11-20 07:27:52.807499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.545 qpair failed and we were unable to recover it. 
00:25:49.545 [2024-11-20 07:27:52.807752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.545 [2024-11-20 07:27:52.807818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.545 qpair failed and we were unable to recover it. 00:25:49.545 [2024-11-20 07:27:52.808075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.545 [2024-11-20 07:27:52.808141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.545 qpair failed and we were unable to recover it. 00:25:49.545 [2024-11-20 07:27:52.808406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.545 [2024-11-20 07:27:52.808473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.545 qpair failed and we were unable to recover it. 00:25:49.545 [2024-11-20 07:27:52.808704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.545 [2024-11-20 07:27:52.808768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.545 qpair failed and we were unable to recover it. 00:25:49.545 [2024-11-20 07:27:52.809050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.545 [2024-11-20 07:27:52.809115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.545 qpair failed and we were unable to recover it. 00:25:49.545 [2024-11-20 07:27:52.809407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.545 [2024-11-20 07:27:52.809475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.545 qpair failed and we were unable to recover it. 00:25:49.545 [2024-11-20 07:27:52.809734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.545 [2024-11-20 07:27:52.809801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.545 qpair failed and we were unable to recover it. 00:25:49.545 [2024-11-20 07:27:52.810094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.545 [2024-11-20 07:27:52.810159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.545 qpair failed and we were unable to recover it. 00:25:49.545 [2024-11-20 07:27:52.810455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.545 [2024-11-20 07:27:52.810533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.545 qpair failed and we were unable to recover it. 00:25:49.545 [2024-11-20 07:27:52.810751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.545 [2024-11-20 07:27:52.810816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.545 qpair failed and we were unable to recover it. 
00:25:49.545 [2024-11-20 07:27:52.811058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.545 [2024-11-20 07:27:52.811123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.545 qpair failed and we were unable to recover it. 00:25:49.545 [2024-11-20 07:27:52.811352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.545 [2024-11-20 07:27:52.811420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.545 qpair failed and we were unable to recover it. 00:25:49.545 [2024-11-20 07:27:52.811629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.545 [2024-11-20 07:27:52.811693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.545 qpair failed and we were unable to recover it. 00:25:49.545 [2024-11-20 07:27:52.811939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.545 [2024-11-20 07:27:52.812007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.545 qpair failed and we were unable to recover it. 00:25:49.545 [2024-11-20 07:27:52.812248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.545 [2024-11-20 07:27:52.812345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.545 qpair failed and we were unable to recover it. 00:25:49.545 [2024-11-20 07:27:52.812625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.545 [2024-11-20 07:27:52.812693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.545 qpair failed and we were unable to recover it. 00:25:49.545 [2024-11-20 07:27:52.812953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.545 [2024-11-20 07:27:52.813020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.545 qpair failed and we were unable to recover it. 00:25:49.545 [2024-11-20 07:27:52.813330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.545 [2024-11-20 07:27:52.813397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.545 qpair failed and we were unable to recover it. 00:25:49.545 [2024-11-20 07:27:52.813647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.545 [2024-11-20 07:27:52.813713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.545 qpair failed and we were unable to recover it. 00:25:49.545 [2024-11-20 07:27:52.813992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.545 [2024-11-20 07:27:52.814059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.545 qpair failed and we were unable to recover it. 
00:25:49.545 [2024-11-20 07:27:52.814354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.545 [2024-11-20 07:27:52.814422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.545 qpair failed and we were unable to recover it. 00:25:49.545 [2024-11-20 07:27:52.814703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.545 [2024-11-20 07:27:52.814769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.545 qpair failed and we were unable to recover it. 00:25:49.545 [2024-11-20 07:27:52.814992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.545 [2024-11-20 07:27:52.815060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.545 qpair failed and we were unable to recover it. 00:25:49.545 [2024-11-20 07:27:52.815324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.545 [2024-11-20 07:27:52.815392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.545 qpair failed and we were unable to recover it. 00:25:49.545 [2024-11-20 07:27:52.815676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.546 [2024-11-20 07:27:52.815742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.546 qpair failed and we were unable to recover it. 00:25:49.546 [2024-11-20 07:27:52.816037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.546 [2024-11-20 07:27:52.816102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.546 qpair failed and we were unable to recover it. 00:25:49.546 [2024-11-20 07:27:52.816319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.546 [2024-11-20 07:27:52.816386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.546 qpair failed and we were unable to recover it. 00:25:49.546 [2024-11-20 07:27:52.816657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.546 [2024-11-20 07:27:52.816723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.546 qpair failed and we were unable to recover it. 00:25:49.546 [2024-11-20 07:27:52.816933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.546 [2024-11-20 07:27:52.816998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.546 qpair failed and we were unable to recover it. 00:25:49.546 [2024-11-20 07:27:52.817221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.546 [2024-11-20 07:27:52.817286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.546 qpair failed and we were unable to recover it. 
00:25:49.546 [2024-11-20 07:27:52.817551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.546 [2024-11-20 07:27:52.817618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.546 qpair failed and we were unable to recover it. 00:25:49.546 [2024-11-20 07:27:52.817915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.546 [2024-11-20 07:27:52.817980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.546 qpair failed and we were unable to recover it. 00:25:49.546 [2024-11-20 07:27:52.818273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.546 [2024-11-20 07:27:52.818368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.546 qpair failed and we were unable to recover it. 00:25:49.546 [2024-11-20 07:27:52.818625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.546 [2024-11-20 07:27:52.818691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.546 qpair failed and we were unable to recover it. 00:25:49.546 [2024-11-20 07:27:52.818888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.546 [2024-11-20 07:27:52.818956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.546 qpair failed and we were unable to recover it. 00:25:49.546 [2024-11-20 07:27:52.819264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.546 [2024-11-20 07:27:52.819347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.546 qpair failed and we were unable to recover it. 00:25:49.546 [2024-11-20 07:27:52.819613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.546 [2024-11-20 07:27:52.819679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.546 qpair failed and we were unable to recover it. 00:25:49.546 [2024-11-20 07:27:52.819964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.546 [2024-11-20 07:27:52.820029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.546 qpair failed and we were unable to recover it. 00:25:49.546 [2024-11-20 07:27:52.820283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.546 [2024-11-20 07:27:52.820362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.546 qpair failed and we were unable to recover it. 00:25:49.546 [2024-11-20 07:27:52.820619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.546 [2024-11-20 07:27:52.820685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.546 qpair failed and we were unable to recover it. 
00:25:49.546 [2024-11-20 07:27:52.820941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.546 [2024-11-20 07:27:52.821006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.546 qpair failed and we were unable to recover it. 00:25:49.546 [2024-11-20 07:27:52.821294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.546 [2024-11-20 07:27:52.821372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.546 qpair failed and we were unable to recover it. 00:25:49.546 [2024-11-20 07:27:52.821630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.546 [2024-11-20 07:27:52.821697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.546 qpair failed and we were unable to recover it. 00:25:49.546 [2024-11-20 07:27:52.821981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.546 [2024-11-20 07:27:52.822046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.546 qpair failed and we were unable to recover it. 00:25:49.546 [2024-11-20 07:27:52.822251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.546 [2024-11-20 07:27:52.822341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.546 qpair failed and we were unable to recover it. 00:25:49.546 [2024-11-20 07:27:52.822606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.546 [2024-11-20 07:27:52.822672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.546 qpair failed and we were unable to recover it. 00:25:49.546 [2024-11-20 07:27:52.822958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.546 [2024-11-20 07:27:52.823023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.546 qpair failed and we were unable to recover it. 00:25:49.546 [2024-11-20 07:27:52.823295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.546 [2024-11-20 07:27:52.823378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.546 qpair failed and we were unable to recover it. 00:25:49.546 [2024-11-20 07:27:52.823667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.546 [2024-11-20 07:27:52.823744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.546 qpair failed and we were unable to recover it. 00:25:49.546 [2024-11-20 07:27:52.824032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.546 [2024-11-20 07:27:52.824098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.546 qpair failed and we were unable to recover it. 
00:25:49.546 [2024-11-20 07:27:52.824400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.546 [2024-11-20 07:27:52.824469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.546 qpair failed and we were unable to recover it. 00:25:49.546 [2024-11-20 07:27:52.824693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.546 [2024-11-20 07:27:52.824761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.546 qpair failed and we were unable to recover it. 00:25:49.546 [2024-11-20 07:27:52.825018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.546 [2024-11-20 07:27:52.825085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.546 qpair failed and we were unable to recover it. 00:25:49.546 [2024-11-20 07:27:52.825342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.546 [2024-11-20 07:27:52.825409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.546 qpair failed and we were unable to recover it. 00:25:49.546 [2024-11-20 07:27:52.825660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.546 [2024-11-20 07:27:52.825726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.546 qpair failed and we were unable to recover it. 00:25:49.546 [2024-11-20 07:27:52.826029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.546 [2024-11-20 07:27:52.826095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.546 qpair failed and we were unable to recover it. 00:25:49.547 [2024-11-20 07:27:52.826356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.547 [2024-11-20 07:27:52.826424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.547 qpair failed and we were unable to recover it. 00:25:49.547 [2024-11-20 07:27:52.826701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.547 [2024-11-20 07:27:52.826767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.547 qpair failed and we were unable to recover it. 00:25:49.547 [2024-11-20 07:27:52.827070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.547 [2024-11-20 07:27:52.827136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.547 qpair failed and we were unable to recover it. 00:25:49.547 [2024-11-20 07:27:52.827393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.547 [2024-11-20 07:27:52.827463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.547 qpair failed and we were unable to recover it. 
00:25:49.547 [2024-11-20 07:27:52.827752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.547 [2024-11-20 07:27:52.827818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.547 qpair failed and we were unable to recover it. 00:25:49.547 [2024-11-20 07:27:52.828115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.547 [2024-11-20 07:27:52.828180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.547 qpair failed and we were unable to recover it. 00:25:49.547 [2024-11-20 07:27:52.828492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.547 [2024-11-20 07:27:52.828561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.547 qpair failed and we were unable to recover it. 00:25:49.547 [2024-11-20 07:27:52.828849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.547 [2024-11-20 07:27:52.828914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.547 qpair failed and we were unable to recover it. 00:25:49.547 [2024-11-20 07:27:52.829166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.547 [2024-11-20 07:27:52.829232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.547 qpair failed and we were unable to recover it. 00:25:49.547 [2024-11-20 07:27:52.829479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.547 [2024-11-20 07:27:52.829549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.547 qpair failed and we were unable to recover it. 00:25:49.547 [2024-11-20 07:27:52.829793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.547 [2024-11-20 07:27:52.829858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.547 qpair failed and we were unable to recover it. 00:25:49.547 [2024-11-20 07:27:52.830155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.547 [2024-11-20 07:27:52.830220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.547 qpair failed and we were unable to recover it. 00:25:49.547 [2024-11-20 07:27:52.830523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.547 [2024-11-20 07:27:52.830591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.547 qpair failed and we were unable to recover it. 00:25:49.547 [2024-11-20 07:27:52.830892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.547 [2024-11-20 07:27:52.830959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.547 qpair failed and we were unable to recover it. 
00:25:49.547 [2024-11-20 07:27:52.831207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.547 [2024-11-20 07:27:52.831273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.547 qpair failed and we were unable to recover it. 00:25:49.547 [2024-11-20 07:27:52.831545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.547 [2024-11-20 07:27:52.831610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.547 qpair failed and we were unable to recover it. 00:25:49.547 [2024-11-20 07:27:52.831874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.547 [2024-11-20 07:27:52.831939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.547 qpair failed and we were unable to recover it. 00:25:49.547 [2024-11-20 07:27:52.832189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.547 [2024-11-20 07:27:52.832257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.547 qpair failed and we were unable to recover it. 00:25:49.547 [2024-11-20 07:27:52.832569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.547 [2024-11-20 07:27:52.832635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.547 qpair failed and we were unable to recover it. 00:25:49.547 [2024-11-20 07:27:52.832947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.547 [2024-11-20 07:27:52.833014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.547 qpair failed and we were unable to recover it. 00:25:49.547 [2024-11-20 07:27:52.833260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.547 [2024-11-20 07:27:52.833346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.547 qpair failed and we were unable to recover it. 00:25:49.547 [2024-11-20 07:27:52.833585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.547 [2024-11-20 07:27:52.833650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.547 qpair failed and we were unable to recover it. 00:25:49.547 [2024-11-20 07:27:52.833916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.547 [2024-11-20 07:27:52.833983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.547 qpair failed and we were unable to recover it. 00:25:49.547 [2024-11-20 07:27:52.834266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.547 [2024-11-20 07:27:52.834361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.547 qpair failed and we were unable to recover it. 
00:25:49.547 [2024-11-20 07:27:52.834574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.547 [2024-11-20 07:27:52.834643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.547 qpair failed and we were unable to recover it. 00:25:49.547 [2024-11-20 07:27:52.834910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.547 [2024-11-20 07:27:52.834976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.547 qpair failed and we were unable to recover it. 00:25:49.547 [2024-11-20 07:27:52.835237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.547 [2024-11-20 07:27:52.835322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.547 qpair failed and we were unable to recover it. 00:25:49.547 [2024-11-20 07:27:52.835576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.547 [2024-11-20 07:27:52.835645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.547 qpair failed and we were unable to recover it. 00:25:49.547 [2024-11-20 07:27:52.835911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.547 [2024-11-20 07:27:52.835980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.547 qpair failed and we were unable to recover it. 00:25:49.547 [2024-11-20 07:27:52.836227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.547 [2024-11-20 07:27:52.836295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.547 qpair failed and we were unable to recover it. 00:25:49.547 [2024-11-20 07:27:52.836595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.547 [2024-11-20 07:27:52.836662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.547 qpair failed and we were unable to recover it. 00:25:49.547 [2024-11-20 07:27:52.836913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.548 [2024-11-20 07:27:52.836981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.548 qpair failed and we were unable to recover it. 00:25:49.548 [2024-11-20 07:27:52.837198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.548 [2024-11-20 07:27:52.837274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.548 qpair failed and we were unable to recover it. 00:25:49.548 [2024-11-20 07:27:52.837552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.548 [2024-11-20 07:27:52.837619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.548 qpair failed and we were unable to recover it. 
00:25:49.548 [2024-11-20 07:27:52.837834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.548 [2024-11-20 07:27:52.837902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.548 qpair failed and we were unable to recover it. 00:25:49.548 [2024-11-20 07:27:52.838201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.548 [2024-11-20 07:27:52.838267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.548 qpair failed and we were unable to recover it. 00:25:49.548 [2024-11-20 07:27:52.838553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.548 [2024-11-20 07:27:52.838620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.548 qpair failed and we were unable to recover it. 00:25:49.548 [2024-11-20 07:27:52.838876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.548 [2024-11-20 07:27:52.838942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.548 qpair failed and we were unable to recover it. 00:25:49.548 [2024-11-20 07:27:52.839240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.548 [2024-11-20 07:27:52.839323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.548 qpair failed and we were unable to recover it. 00:25:49.548 [2024-11-20 07:27:52.839584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.548 [2024-11-20 07:27:52.839650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.548 qpair failed and we were unable to recover it. 00:25:49.548 [2024-11-20 07:27:52.839882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.548 [2024-11-20 07:27:52.839947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.548 qpair failed and we were unable to recover it. 00:25:49.548 [2024-11-20 07:27:52.840236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.548 [2024-11-20 07:27:52.840320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.548 qpair failed and we were unable to recover it. 00:25:49.548 [2024-11-20 07:27:52.840619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.548 [2024-11-20 07:27:52.840686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.548 qpair failed and we were unable to recover it. 00:25:49.548 [2024-11-20 07:27:52.840944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.548 [2024-11-20 07:27:52.841009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.548 qpair failed and we were unable to recover it. 
00:25:49.548 [2024-11-20 07:27:52.841299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.548 [2024-11-20 07:27:52.841385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420
00:25:49.548 qpair failed and we were unable to recover it.
00:25:49.548 [2024-11-20 07:27:52.841599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.548 [2024-11-20 07:27:52.841666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420
00:25:49.548 qpair failed and we were unable to recover it.
00:25:49.548 [2024-11-20 07:27:52.841976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.548 [2024-11-20 07:27:52.842041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420
00:25:49.548 qpair failed and we were unable to recover it.
[... the same three-line failure pattern (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 07:27:52.841 through 07:27:52.911 ...]
00:25:49.555 [2024-11-20 07:27:52.911468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.555 [2024-11-20 07:27:52.911536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.555 qpair failed and we were unable to recover it. 00:25:49.555 [2024-11-20 07:27:52.911732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.555 [2024-11-20 07:27:52.911797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.555 qpair failed and we were unable to recover it. 00:25:49.555 [2024-11-20 07:27:52.912021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.555 [2024-11-20 07:27:52.912087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.555 qpair failed and we were unable to recover it. 00:25:49.555 [2024-11-20 07:27:52.912336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.555 [2024-11-20 07:27:52.912404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.555 qpair failed and we were unable to recover it. 00:25:49.555 [2024-11-20 07:27:52.912622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.555 [2024-11-20 07:27:52.912688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.555 qpair failed and we were unable to recover it. 00:25:49.555 [2024-11-20 07:27:52.912987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.555 [2024-11-20 07:27:52.913053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.555 qpair failed and we were unable to recover it. 00:25:49.555 [2024-11-20 07:27:52.913361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.555 [2024-11-20 07:27:52.913429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.555 qpair failed and we were unable to recover it. 00:25:49.555 [2024-11-20 07:27:52.913687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.555 [2024-11-20 07:27:52.913752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.555 qpair failed and we were unable to recover it. 00:25:49.555 [2024-11-20 07:27:52.914036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.555 [2024-11-20 07:27:52.914102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.555 qpair failed and we were unable to recover it. 00:25:49.555 [2024-11-20 07:27:52.914348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.555 [2024-11-20 07:27:52.914416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.555 qpair failed and we were unable to recover it. 
00:25:49.555 [2024-11-20 07:27:52.914724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.555 [2024-11-20 07:27:52.914789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.555 qpair failed and we were unable to recover it. 00:25:49.555 [2024-11-20 07:27:52.915034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.555 [2024-11-20 07:27:52.915100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.555 qpair failed and we were unable to recover it. 00:25:49.555 [2024-11-20 07:27:52.915340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.555 [2024-11-20 07:27:52.915409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.555 qpair failed and we were unable to recover it. 00:25:49.555 [2024-11-20 07:27:52.915670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.555 [2024-11-20 07:27:52.915734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.555 qpair failed and we were unable to recover it. 00:25:49.555 [2024-11-20 07:27:52.915926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.555 [2024-11-20 07:27:52.915995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.555 qpair failed and we were unable to recover it. 00:25:49.555 [2024-11-20 07:27:52.916245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.555 [2024-11-20 07:27:52.916330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.555 qpair failed and we were unable to recover it. 00:25:49.555 [2024-11-20 07:27:52.916656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.555 [2024-11-20 07:27:52.916722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.555 qpair failed and we were unable to recover it. 00:25:49.555 [2024-11-20 07:27:52.917006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.555 [2024-11-20 07:27:52.917071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.555 qpair failed and we were unable to recover it. 00:25:49.555 [2024-11-20 07:27:52.917369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.555 [2024-11-20 07:27:52.917448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.555 qpair failed and we were unable to recover it. 00:25:49.555 [2024-11-20 07:27:52.917704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.555 [2024-11-20 07:27:52.917770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.555 qpair failed and we were unable to recover it. 
00:25:49.555 [2024-11-20 07:27:52.918059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.555 [2024-11-20 07:27:52.918125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.555 qpair failed and we were unable to recover it. 00:25:49.555 [2024-11-20 07:27:52.918384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.555 [2024-11-20 07:27:52.918451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.555 qpair failed and we were unable to recover it. 00:25:49.555 [2024-11-20 07:27:52.918739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.555 [2024-11-20 07:27:52.918804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.555 qpair failed and we were unable to recover it. 00:25:49.555 [2024-11-20 07:27:52.919055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.555 [2024-11-20 07:27:52.919121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.555 qpair failed and we were unable to recover it. 00:25:49.555 [2024-11-20 07:27:52.919407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.555 [2024-11-20 07:27:52.919474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.555 qpair failed and we were unable to recover it. 00:25:49.555 [2024-11-20 07:27:52.919763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.555 [2024-11-20 07:27:52.919829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.555 qpair failed and we were unable to recover it. 00:25:49.555 [2024-11-20 07:27:52.920122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.555 [2024-11-20 07:27:52.920187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.555 qpair failed and we were unable to recover it. 00:25:49.555 [2024-11-20 07:27:52.920470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.555 [2024-11-20 07:27:52.920537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.555 qpair failed and we were unable to recover it. 00:25:49.555 [2024-11-20 07:27:52.920824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.555 [2024-11-20 07:27:52.920890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.555 qpair failed and we were unable to recover it. 00:25:49.555 [2024-11-20 07:27:52.921132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.555 [2024-11-20 07:27:52.921200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.555 qpair failed and we were unable to recover it. 
00:25:49.555 [2024-11-20 07:27:52.921475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.555 [2024-11-20 07:27:52.921543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.555 qpair failed and we were unable to recover it. 00:25:49.556 [2024-11-20 07:27:52.921793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.556 [2024-11-20 07:27:52.921858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.556 qpair failed and we were unable to recover it. 00:25:49.556 [2024-11-20 07:27:52.922137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.556 [2024-11-20 07:27:52.922203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.556 qpair failed and we were unable to recover it. 00:25:49.556 [2024-11-20 07:27:52.922528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.556 [2024-11-20 07:27:52.922595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.556 qpair failed and we were unable to recover it. 00:25:49.556 [2024-11-20 07:27:52.922886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.556 [2024-11-20 07:27:52.922952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.556 qpair failed and we were unable to recover it. 00:25:49.556 [2024-11-20 07:27:52.923191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.556 [2024-11-20 07:27:52.923258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.556 qpair failed and we were unable to recover it. 00:25:49.556 [2024-11-20 07:27:52.923533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.556 [2024-11-20 07:27:52.923601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.556 qpair failed and we were unable to recover it. 00:25:49.556 [2024-11-20 07:27:52.923851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.556 [2024-11-20 07:27:52.923918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.556 qpair failed and we were unable to recover it. 00:25:49.556 [2024-11-20 07:27:52.924175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.556 [2024-11-20 07:27:52.924241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.556 qpair failed and we were unable to recover it. 00:25:49.556 [2024-11-20 07:27:52.924555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.556 [2024-11-20 07:27:52.924622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.556 qpair failed and we were unable to recover it. 
00:25:49.556 [2024-11-20 07:27:52.924928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.556 [2024-11-20 07:27:52.924994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.556 qpair failed and we were unable to recover it. 00:25:49.556 [2024-11-20 07:27:52.925213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.556 [2024-11-20 07:27:52.925281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.556 qpair failed and we were unable to recover it. 00:25:49.556 [2024-11-20 07:27:52.925594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.556 [2024-11-20 07:27:52.925661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.556 qpair failed and we were unable to recover it. 00:25:49.556 [2024-11-20 07:27:52.925879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.556 [2024-11-20 07:27:52.925944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.556 qpair failed and we were unable to recover it. 00:25:49.556 [2024-11-20 07:27:52.926224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.556 [2024-11-20 07:27:52.926290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.556 qpair failed and we were unable to recover it. 00:25:49.556 [2024-11-20 07:27:52.926652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.556 [2024-11-20 07:27:52.926749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.556 qpair failed and we were unable to recover it. 00:25:49.556 [2024-11-20 07:27:52.927012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.556 [2024-11-20 07:27:52.927080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.556 qpair failed and we were unable to recover it. 00:25:49.556 [2024-11-20 07:27:52.927324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.556 [2024-11-20 07:27:52.927393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.556 qpair failed and we were unable to recover it. 00:25:49.556 [2024-11-20 07:27:52.927627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.556 [2024-11-20 07:27:52.927692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.556 qpair failed and we were unable to recover it. 00:25:49.556 [2024-11-20 07:27:52.927936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.556 [2024-11-20 07:27:52.928001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.556 qpair failed and we were unable to recover it. 
00:25:49.556 [2024-11-20 07:27:52.928257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.556 [2024-11-20 07:27:52.928339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.556 qpair failed and we were unable to recover it. 00:25:49.556 [2024-11-20 07:27:52.928593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.556 [2024-11-20 07:27:52.928660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.556 qpair failed and we were unable to recover it. 00:25:49.556 [2024-11-20 07:27:52.928876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.556 [2024-11-20 07:27:52.928942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.556 qpair failed and we were unable to recover it. 00:25:49.556 [2024-11-20 07:27:52.929151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.556 [2024-11-20 07:27:52.929216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.556 qpair failed and we were unable to recover it. 00:25:49.556 [2024-11-20 07:27:52.929486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.556 [2024-11-20 07:27:52.929555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.556 qpair failed and we were unable to recover it. 00:25:49.556 [2024-11-20 07:27:52.929796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.556 [2024-11-20 07:27:52.929861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.556 qpair failed and we were unable to recover it. 00:25:49.556 [2024-11-20 07:27:52.930088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.556 [2024-11-20 07:27:52.930153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.556 qpair failed and we were unable to recover it. 00:25:49.556 [2024-11-20 07:27:52.930389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.556 [2024-11-20 07:27:52.930457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.556 qpair failed and we were unable to recover it. 00:25:49.556 [2024-11-20 07:27:52.930704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.556 [2024-11-20 07:27:52.930769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.556 qpair failed and we were unable to recover it. 00:25:49.556 [2024-11-20 07:27:52.931071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.556 [2024-11-20 07:27:52.931136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.556 qpair failed and we were unable to recover it. 
00:25:49.556 [2024-11-20 07:27:52.931390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.556 [2024-11-20 07:27:52.931456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.556 qpair failed and we were unable to recover it. 00:25:49.556 [2024-11-20 07:27:52.931725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.556 [2024-11-20 07:27:52.931790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.556 qpair failed and we were unable to recover it. 00:25:49.557 [2024-11-20 07:27:52.932084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.557 [2024-11-20 07:27:52.932148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.557 qpair failed and we were unable to recover it. 00:25:49.557 [2024-11-20 07:27:52.932443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.557 [2024-11-20 07:27:52.932509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.557 qpair failed and we were unable to recover it. 00:25:49.557 [2024-11-20 07:27:52.932778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.557 [2024-11-20 07:27:52.932844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.557 qpair failed and we were unable to recover it. 00:25:49.557 [2024-11-20 07:27:52.933078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.557 [2024-11-20 07:27:52.933142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.557 qpair failed and we were unable to recover it. 00:25:49.557 [2024-11-20 07:27:52.933384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.557 [2024-11-20 07:27:52.933450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.557 qpair failed and we were unable to recover it. 00:25:49.557 [2024-11-20 07:27:52.933710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.557 [2024-11-20 07:27:52.933776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.557 qpair failed and we were unable to recover it. 00:25:49.557 [2024-11-20 07:27:52.934018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.557 [2024-11-20 07:27:52.934082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.557 qpair failed and we were unable to recover it. 00:25:49.557 [2024-11-20 07:27:52.934336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.557 [2024-11-20 07:27:52.934403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.557 qpair failed and we were unable to recover it. 
00:25:49.557 [2024-11-20 07:27:52.934657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.557 [2024-11-20 07:27:52.934723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.557 qpair failed and we were unable to recover it. 00:25:49.557 [2024-11-20 07:27:52.934944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.557 [2024-11-20 07:27:52.935008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.557 qpair failed and we were unable to recover it. 00:25:49.557 [2024-11-20 07:27:52.935253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.557 [2024-11-20 07:27:52.935342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.557 qpair failed and we were unable to recover it. 00:25:49.557 [2024-11-20 07:27:52.935595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.557 [2024-11-20 07:27:52.935662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.557 qpair failed and we were unable to recover it. 00:25:49.557 [2024-11-20 07:27:52.935961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.557 [2024-11-20 07:27:52.936025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.557 qpair failed and we were unable to recover it. 00:25:49.557 [2024-11-20 07:27:52.936253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.557 [2024-11-20 07:27:52.936332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.557 qpair failed and we were unable to recover it. 00:25:49.557 [2024-11-20 07:27:52.936592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.557 [2024-11-20 07:27:52.936657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.557 qpair failed and we were unable to recover it. 00:25:49.842 [2024-11-20 07:27:52.936913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.842 [2024-11-20 07:27:52.936978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.842 qpair failed and we were unable to recover it. 00:25:49.842 [2024-11-20 07:27:52.937184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.842 [2024-11-20 07:27:52.937249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.842 qpair failed and we were unable to recover it. 00:25:49.842 [2024-11-20 07:27:52.937514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.842 [2024-11-20 07:27:52.937580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.842 qpair failed and we were unable to recover it. 
00:25:49.842 [2024-11-20 07:27:52.937804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.842 [2024-11-20 07:27:52.937869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.842 qpair failed and we were unable to recover it. 00:25:49.842 [2024-11-20 07:27:52.938152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.842 [2024-11-20 07:27:52.938216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.842 qpair failed and we were unable to recover it. 00:25:49.842 [2024-11-20 07:27:52.938469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.842 [2024-11-20 07:27:52.938535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.842 qpair failed and we were unable to recover it. 00:25:49.842 [2024-11-20 07:27:52.938765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.842 [2024-11-20 07:27:52.938830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.842 qpair failed and we were unable to recover it. 00:25:49.842 [2024-11-20 07:27:52.939126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.843 [2024-11-20 07:27:52.939190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.843 qpair failed and we were unable to recover it. 00:25:49.843 [2024-11-20 07:27:52.939437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.843 [2024-11-20 07:27:52.939503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.843 qpair failed and we were unable to recover it. 00:25:49.843 [2024-11-20 07:27:52.939712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.843 [2024-11-20 07:27:52.939777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.843 qpair failed and we were unable to recover it. 00:25:49.843 [2024-11-20 07:27:52.940063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.843 [2024-11-20 07:27:52.940128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.843 qpair failed and we were unable to recover it. 00:25:49.843 [2024-11-20 07:27:52.940349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.843 [2024-11-20 07:27:52.940415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.843 qpair failed and we were unable to recover it. 00:25:49.843 [2024-11-20 07:27:52.940617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.843 [2024-11-20 07:27:52.940682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.843 qpair failed and we were unable to recover it. 
00:25:49.843 [2024-11-20 07:27:52.940920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.843 [2024-11-20 07:27:52.940986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.843 qpair failed and we were unable to recover it. 00:25:49.843 [2024-11-20 07:27:52.941196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.843 [2024-11-20 07:27:52.941261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.843 qpair failed and we were unable to recover it. 00:25:49.843 [2024-11-20 07:27:52.941522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.843 [2024-11-20 07:27:52.941588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.843 qpair failed and we were unable to recover it. 00:25:49.843 [2024-11-20 07:27:52.941840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.843 [2024-11-20 07:27:52.941904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.843 qpair failed and we were unable to recover it. 00:25:49.843 [2024-11-20 07:27:52.942162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.843 [2024-11-20 07:27:52.942227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.843 qpair failed and we were unable to recover it. 00:25:49.843 [2024-11-20 07:27:52.942530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.843 [2024-11-20 07:27:52.942595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.843 qpair failed and we were unable to recover it. 00:25:49.843 [2024-11-20 07:27:52.942817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.843 [2024-11-20 07:27:52.942882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.843 qpair failed and we were unable to recover it. 00:25:49.843 [2024-11-20 07:27:52.943162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.843 [2024-11-20 07:27:52.943228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.843 qpair failed and we were unable to recover it. 00:25:49.843 [2024-11-20 07:27:52.943497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.843 [2024-11-20 07:27:52.943562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.843 qpair failed and we were unable to recover it. 00:25:49.843 [2024-11-20 07:27:52.943806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.843 [2024-11-20 07:27:52.943884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.843 qpair failed and we were unable to recover it. 
00:25:49.843 [2024-11-20 07:27:52.944180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.843 [2024-11-20 07:27:52.944245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.843 qpair failed and we were unable to recover it. 00:25:49.843 [2024-11-20 07:27:52.944514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.843 [2024-11-20 07:27:52.944580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.843 qpair failed and we were unable to recover it. 00:25:49.843 [2024-11-20 07:27:52.944806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.843 [2024-11-20 07:27:52.944870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.843 qpair failed and we were unable to recover it. 00:25:49.843 [2024-11-20 07:27:52.945157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.843 [2024-11-20 07:27:52.945221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.843 qpair failed and we were unable to recover it. 00:25:49.843 [2024-11-20 07:27:52.945494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.843 [2024-11-20 07:27:52.945559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.843 qpair failed and we were unable to recover it. 00:25:49.843 [2024-11-20 07:27:52.945811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.843 [2024-11-20 07:27:52.945876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.843 qpair failed and we were unable to recover it. 00:25:49.843 [2024-11-20 07:27:52.946076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.843 [2024-11-20 07:27:52.946144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.843 qpair failed and we were unable to recover it. 00:25:49.843 [2024-11-20 07:27:52.946341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.843 [2024-11-20 07:27:52.946407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.843 qpair failed and we were unable to recover it. 00:25:49.843 [2024-11-20 07:27:52.946692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.843 [2024-11-20 07:27:52.946757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.843 qpair failed and we were unable to recover it. 00:25:49.843 [2024-11-20 07:27:52.947000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.843 [2024-11-20 07:27:52.947065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.843 qpair failed and we were unable to recover it. 
00:25:49.843 [2024-11-20 07:27:52.947365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.843 [2024-11-20 07:27:52.947430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.843 qpair failed and we were unable to recover it. 00:25:49.843 [2024-11-20 07:27:52.947685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.843 [2024-11-20 07:27:52.947768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.843 qpair failed and we were unable to recover it. 00:25:49.843 [2024-11-20 07:27:52.947993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.843 [2024-11-20 07:27:52.948060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.843 qpair failed and we were unable to recover it. 00:25:49.843 [2024-11-20 07:27:52.948337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.843 [2024-11-20 07:27:52.948403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.843 qpair failed and we were unable to recover it. 00:25:49.843 [2024-11-20 07:27:52.948674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.843 [2024-11-20 07:27:52.948739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.843 qpair failed and we were unable to recover it. 00:25:49.843 [2024-11-20 07:27:52.949024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.843 [2024-11-20 07:27:52.949089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.843 qpair failed and we were unable to recover it. 00:25:49.843 [2024-11-20 07:27:52.949353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.843 [2024-11-20 07:27:52.949420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.843 qpair failed and we were unable to recover it. 00:25:49.843 [2024-11-20 07:27:52.949674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.843 [2024-11-20 07:27:52.949740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.843 qpair failed and we were unable to recover it. 00:25:49.843 [2024-11-20 07:27:52.949953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.844 [2024-11-20 07:27:52.950017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.844 qpair failed and we were unable to recover it. 00:25:49.844 [2024-11-20 07:27:52.950257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.844 [2024-11-20 07:27:52.950341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.844 qpair failed and we were unable to recover it. 
00:25:49.844 [2024-11-20 07:27:52.950593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.844 [2024-11-20 07:27:52.950658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.844 qpair failed and we were unable to recover it. 00:25:49.844 [2024-11-20 07:27:52.950947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.844 [2024-11-20 07:27:52.951011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.844 qpair failed and we were unable to recover it. 00:25:49.844 [2024-11-20 07:27:52.951321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.844 [2024-11-20 07:27:52.951388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.844 qpair failed and we were unable to recover it. 00:25:49.844 [2024-11-20 07:27:52.951654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.844 [2024-11-20 07:27:52.951721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.844 qpair failed and we were unable to recover it. 00:25:49.844 [2024-11-20 07:27:52.951980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.844 [2024-11-20 07:27:52.952044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.844 qpair failed and we were unable to recover it. 00:25:49.844 [2024-11-20 07:27:52.952285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.844 [2024-11-20 07:27:52.952368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.844 qpair failed and we were unable to recover it. 00:25:49.844 [2024-11-20 07:27:52.952624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.844 [2024-11-20 07:27:52.952690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.844 qpair failed and we were unable to recover it. 00:25:49.844 [2024-11-20 07:27:52.952966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.844 [2024-11-20 07:27:52.953030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.844 qpair failed and we were unable to recover it. 00:25:49.844 [2024-11-20 07:27:52.953291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.844 [2024-11-20 07:27:52.953386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.844 qpair failed and we were unable to recover it. 00:25:49.844 [2024-11-20 07:27:52.953669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.844 [2024-11-20 07:27:52.953734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.844 qpair failed and we were unable to recover it. 
00:25:49.844 [2024-11-20 07:27:52.954031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.844 [2024-11-20 07:27:52.954096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.844 qpair failed and we were unable to recover it.
[... the identical error pair -- posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it." -- repeats continuously from 07:27:52.954 through 07:27:53.022 (log timestamps 00:25:49.844-00:25:49.851); only the timestamps differ between repetitions ...]
00:25:49.851 [2024-11-20 07:27:53.022692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.851 [2024-11-20 07:27:53.022760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.851 qpair failed and we were unable to recover it. 00:25:49.851 [2024-11-20 07:27:53.023063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.851 [2024-11-20 07:27:53.023127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.851 qpair failed and we were unable to recover it. 00:25:49.851 [2024-11-20 07:27:53.023391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.851 [2024-11-20 07:27:53.023458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.851 qpair failed and we were unable to recover it. 00:25:49.851 [2024-11-20 07:27:53.023752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.851 [2024-11-20 07:27:53.023817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.851 qpair failed and we were unable to recover it. 00:25:49.851 [2024-11-20 07:27:53.024119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.851 [2024-11-20 07:27:53.024183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.851 qpair failed and we were unable to recover it. 00:25:49.851 [2024-11-20 07:27:53.024454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.851 [2024-11-20 07:27:53.024520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.851 qpair failed and we were unable to recover it. 00:25:49.851 [2024-11-20 07:27:53.024780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.851 [2024-11-20 07:27:53.024845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.851 qpair failed and we were unable to recover it. 00:25:49.851 [2024-11-20 07:27:53.025125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.851 [2024-11-20 07:27:53.025190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.851 qpair failed and we were unable to recover it. 00:25:49.851 [2024-11-20 07:27:53.025461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.851 [2024-11-20 07:27:53.025529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.851 qpair failed and we were unable to recover it. 00:25:49.851 [2024-11-20 07:27:53.025781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.851 [2024-11-20 07:27:53.025846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.851 qpair failed and we were unable to recover it. 
00:25:49.851 [2024-11-20 07:27:53.026143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.851 [2024-11-20 07:27:53.026208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.851 qpair failed and we were unable to recover it. 00:25:49.851 [2024-11-20 07:27:53.026525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.851 [2024-11-20 07:27:53.026592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.851 qpair failed and we were unable to recover it. 00:25:49.851 [2024-11-20 07:27:53.026834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.851 [2024-11-20 07:27:53.026899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.851 qpair failed and we were unable to recover it. 00:25:49.851 [2024-11-20 07:27:53.027143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.851 [2024-11-20 07:27:53.027211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.851 qpair failed and we were unable to recover it. 00:25:49.851 [2024-11-20 07:27:53.027527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.851 [2024-11-20 07:27:53.027593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.851 qpair failed and we were unable to recover it. 00:25:49.851 [2024-11-20 07:27:53.027847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.852 [2024-11-20 07:27:53.027912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.852 qpair failed and we were unable to recover it. 00:25:49.852 [2024-11-20 07:27:53.028119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.852 [2024-11-20 07:27:53.028185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.852 qpair failed and we were unable to recover it. 00:25:49.852 [2024-11-20 07:27:53.028460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.852 [2024-11-20 07:27:53.028527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.852 qpair failed and we were unable to recover it. 00:25:49.852 [2024-11-20 07:27:53.028814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.852 [2024-11-20 07:27:53.028879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.852 qpair failed and we were unable to recover it. 00:25:49.852 [2024-11-20 07:27:53.029134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.852 [2024-11-20 07:27:53.029200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.852 qpair failed and we were unable to recover it. 
00:25:49.852 [2024-11-20 07:27:53.029517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.852 [2024-11-20 07:27:53.029582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.852 qpair failed and we were unable to recover it. 00:25:49.852 [2024-11-20 07:27:53.029830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.852 [2024-11-20 07:27:53.029895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.852 qpair failed and we were unable to recover it. 00:25:49.852 [2024-11-20 07:27:53.030175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.852 [2024-11-20 07:27:53.030240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.852 qpair failed and we were unable to recover it. 00:25:49.852 [2024-11-20 07:27:53.030469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.852 [2024-11-20 07:27:53.030535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.852 qpair failed and we were unable to recover it. 00:25:49.852 [2024-11-20 07:27:53.030819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.852 [2024-11-20 07:27:53.030885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.852 qpair failed and we were unable to recover it. 00:25:49.852 [2024-11-20 07:27:53.031165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.852 [2024-11-20 07:27:53.031230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.852 qpair failed and we were unable to recover it. 00:25:49.852 [2024-11-20 07:27:53.031541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.852 [2024-11-20 07:27:53.031608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.852 qpair failed and we were unable to recover it. 00:25:49.852 [2024-11-20 07:27:53.031860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.852 [2024-11-20 07:27:53.031927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.852 qpair failed and we were unable to recover it. 00:25:49.852 [2024-11-20 07:27:53.032230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.852 [2024-11-20 07:27:53.032295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.852 qpair failed and we were unable to recover it. 00:25:49.852 [2024-11-20 07:27:53.032533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.852 [2024-11-20 07:27:53.032601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.852 qpair failed and we were unable to recover it. 
00:25:49.852 [2024-11-20 07:27:53.032865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.852 [2024-11-20 07:27:53.032931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.852 qpair failed and we were unable to recover it. 00:25:49.852 [2024-11-20 07:27:53.033232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.852 [2024-11-20 07:27:53.033296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.852 qpair failed and we were unable to recover it. 00:25:49.852 [2024-11-20 07:27:53.033608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.852 [2024-11-20 07:27:53.033674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.852 qpair failed and we were unable to recover it. 00:25:49.852 [2024-11-20 07:27:53.033887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.852 [2024-11-20 07:27:53.033921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.852 qpair failed and we were unable to recover it. 00:25:49.852 [2024-11-20 07:27:53.034040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.852 [2024-11-20 07:27:53.034074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.852 qpair failed and we were unable to recover it. 00:25:49.852 [2024-11-20 07:27:53.034281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.852 [2024-11-20 07:27:53.034383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.852 qpair failed and we were unable to recover it. 00:25:49.852 [2024-11-20 07:27:53.034529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.852 [2024-11-20 07:27:53.034562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.852 qpair failed and we were unable to recover it. 00:25:49.852 [2024-11-20 07:27:53.034700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.852 [2024-11-20 07:27:53.034732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.852 qpair failed and we were unable to recover it. 00:25:49.852 [2024-11-20 07:27:53.034843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.852 [2024-11-20 07:27:53.034876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.852 qpair failed and we were unable to recover it. 00:25:49.852 [2024-11-20 07:27:53.034985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.852 [2024-11-20 07:27:53.035018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.852 qpair failed and we were unable to recover it. 
00:25:49.852 [2024-11-20 07:27:53.035219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.852 [2024-11-20 07:27:53.035284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.852 qpair failed and we were unable to recover it. 00:25:49.852 [2024-11-20 07:27:53.035526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.852 [2024-11-20 07:27:53.035561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.852 qpair failed and we were unable to recover it. 00:25:49.852 [2024-11-20 07:27:53.035744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.852 [2024-11-20 07:27:53.035809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.852 qpair failed and we were unable to recover it. 00:25:49.852 [2024-11-20 07:27:53.036042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.852 [2024-11-20 07:27:53.036118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.852 qpair failed and we were unable to recover it. 00:25:49.852 [2024-11-20 07:27:53.036367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.852 [2024-11-20 07:27:53.036408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.852 qpair failed and we were unable to recover it. 00:25:49.852 [2024-11-20 07:27:53.036582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.852 [2024-11-20 07:27:53.036616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.852 qpair failed and we were unable to recover it. 00:25:49.852 [2024-11-20 07:27:53.036808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.852 [2024-11-20 07:27:53.036876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.852 qpair failed and we were unable to recover it. 00:25:49.852 [2024-11-20 07:27:53.037164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.852 [2024-11-20 07:27:53.037217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.852 qpair failed and we were unable to recover it. 00:25:49.852 [2024-11-20 07:27:53.037418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.853 [2024-11-20 07:27:53.037454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.853 qpair failed and we were unable to recover it. 00:25:49.853 [2024-11-20 07:27:53.037569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.853 [2024-11-20 07:27:53.037650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.853 qpair failed and we were unable to recover it. 
00:25:49.853 [2024-11-20 07:27:53.037947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.853 [2024-11-20 07:27:53.038011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.853 qpair failed and we were unable to recover it. 00:25:49.853 [2024-11-20 07:27:53.038233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.853 [2024-11-20 07:27:53.038298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.853 qpair failed and we were unable to recover it. 00:25:49.853 [2024-11-20 07:27:53.038482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.853 [2024-11-20 07:27:53.038517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.853 qpair failed and we were unable to recover it. 00:25:49.853 [2024-11-20 07:27:53.038756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.853 [2024-11-20 07:27:53.038791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.853 qpair failed and we were unable to recover it. 00:25:49.853 [2024-11-20 07:27:53.038901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.853 [2024-11-20 07:27:53.038936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.853 qpair failed and we were unable to recover it. 00:25:49.853 [2024-11-20 07:27:53.039089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.853 [2024-11-20 07:27:53.039154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.853 qpair failed and we were unable to recover it. 00:25:49.853 [2024-11-20 07:27:53.039379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.853 [2024-11-20 07:27:53.039412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.853 qpair failed and we were unable to recover it. 00:25:49.853 [2024-11-20 07:27:53.039560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.853 [2024-11-20 07:27:53.039593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.853 qpair failed and we were unable to recover it. 00:25:49.853 [2024-11-20 07:27:53.039732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.853 [2024-11-20 07:27:53.039766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.853 qpair failed and we were unable to recover it. 00:25:49.853 [2024-11-20 07:27:53.040024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.853 [2024-11-20 07:27:53.040090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.853 qpair failed and we were unable to recover it. 
00:25:49.853 [2024-11-20 07:27:53.040370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.853 [2024-11-20 07:27:53.040406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.853 qpair failed and we were unable to recover it. 00:25:49.853 [2024-11-20 07:27:53.040544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.853 [2024-11-20 07:27:53.040578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.853 qpair failed and we were unable to recover it. 00:25:49.853 [2024-11-20 07:27:53.040778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.853 [2024-11-20 07:27:53.040812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.853 qpair failed and we were unable to recover it. 00:25:49.853 [2024-11-20 07:27:53.040988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.853 [2024-11-20 07:27:53.041022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.853 qpair failed and we were unable to recover it. 00:25:49.853 [2024-11-20 07:27:53.041285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.853 [2024-11-20 07:27:53.041337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.853 qpair failed and we were unable to recover it. 00:25:49.853 [2024-11-20 07:27:53.041454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.853 [2024-11-20 07:27:53.041486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.853 qpair failed and we were unable to recover it. 00:25:49.853 [2024-11-20 07:27:53.041630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.853 [2024-11-20 07:27:53.041692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.853 qpair failed and we were unable to recover it. 00:25:49.853 [2024-11-20 07:27:53.041886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.853 [2024-11-20 07:27:53.041950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.853 qpair failed and we were unable to recover it. 00:25:49.853 [2024-11-20 07:27:53.042242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.853 [2024-11-20 07:27:53.042323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.853 qpair failed and we were unable to recover it. 00:25:49.853 [2024-11-20 07:27:53.042505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.853 [2024-11-20 07:27:53.042538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.853 qpair failed and we were unable to recover it. 
00:25:49.853 [2024-11-20 07:27:53.042669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.853 [2024-11-20 07:27:53.042702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.853 qpair failed and we were unable to recover it. 00:25:49.853 [2024-11-20 07:27:53.042844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.853 [2024-11-20 07:27:53.042882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.853 qpair failed and we were unable to recover it. 00:25:49.853 [2024-11-20 07:27:53.043076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.853 [2024-11-20 07:27:53.043141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.853 qpair failed and we were unable to recover it. 00:25:49.853 [2024-11-20 07:27:53.043427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.853 [2024-11-20 07:27:53.043462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.853 qpair failed and we were unable to recover it. 00:25:49.853 [2024-11-20 07:27:53.043582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.853 [2024-11-20 07:27:53.043616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.853 qpair failed and we were unable to recover it. 00:25:49.853 [2024-11-20 07:27:53.043755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.853 [2024-11-20 07:27:53.043788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.853 qpair failed and we were unable to recover it. 00:25:49.853 [2024-11-20 07:27:53.044025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.853 [2024-11-20 07:27:53.044090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.853 qpair failed and we were unable to recover it. 00:25:49.853 [2024-11-20 07:27:53.044446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.853 [2024-11-20 07:27:53.044481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.853 qpair failed and we were unable to recover it. 00:25:49.853 [2024-11-20 07:27:53.044628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.853 [2024-11-20 07:27:53.044689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.853 qpair failed and we were unable to recover it. 00:25:49.853 [2024-11-20 07:27:53.044947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.854 [2024-11-20 07:27:53.045011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.854 qpair failed and we were unable to recover it. 
00:25:49.854 [2024-11-20 07:27:53.045266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.854 [2024-11-20 07:27:53.045300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.854 qpair failed and we were unable to recover it. 00:25:49.854 [2024-11-20 07:27:53.045436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.854 [2024-11-20 07:27:53.045473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.854 qpair failed and we were unable to recover it. 00:25:49.854 [2024-11-20 07:27:53.045654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.854 [2024-11-20 07:27:53.045719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.854 qpair failed and we were unable to recover it. 00:25:49.854 [2024-11-20 07:27:53.046005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.854 [2024-11-20 07:27:53.046071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.854 qpair failed and we were unable to recover it. 00:25:49.854 [2024-11-20 07:27:53.046367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.854 [2024-11-20 07:27:53.046402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.854 qpair failed and we were unable to recover it. 00:25:49.854 [2024-11-20 07:27:53.046520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.854 [2024-11-20 07:27:53.046554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.854 qpair failed and we were unable to recover it. 00:25:49.854 [2024-11-20 07:27:53.046694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.854 [2024-11-20 07:27:53.046727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.854 qpair failed and we were unable to recover it. 00:25:49.854 [2024-11-20 07:27:53.046912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.854 [2024-11-20 07:27:53.046975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.854 qpair failed and we were unable to recover it. 00:25:49.854 [2024-11-20 07:27:53.047252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.854 [2024-11-20 07:27:53.047333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.854 qpair failed and we were unable to recover it. 00:25:49.854 [2024-11-20 07:27:53.047501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.854 [2024-11-20 07:27:53.047535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.854 qpair failed and we were unable to recover it. 
00:25:49.854 [2024-11-20 07:27:53.047691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.854 [2024-11-20 07:27:53.047756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.854 qpair failed and we were unable to recover it. 00:25:49.854 [2024-11-20 07:27:53.048030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.854 [2024-11-20 07:27:53.048064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.854 qpair failed and we were unable to recover it. 00:25:49.854 [2024-11-20 07:27:53.048321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.854 [2024-11-20 07:27:53.048356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.854 qpair failed and we were unable to recover it. 00:25:49.854 [2024-11-20 07:27:53.048532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.854 [2024-11-20 07:27:53.048566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.854 qpair failed and we were unable to recover it. 00:25:49.854 [2024-11-20 07:27:53.048834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.854 [2024-11-20 07:27:53.048899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.854 qpair failed and we were unable to recover it. 00:25:49.854 [2024-11-20 07:27:53.049160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.854 [2024-11-20 07:27:53.049225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.854 qpair failed and we were unable to recover it. 00:25:49.854 [2024-11-20 07:27:53.049449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.854 [2024-11-20 07:27:53.049484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.854 qpair failed and we were unable to recover it. 00:25:49.854 [2024-11-20 07:27:53.049603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.854 [2024-11-20 07:27:53.049641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.854 qpair failed and we were unable to recover it. 00:25:49.854 [2024-11-20 07:27:53.049885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.854 [2024-11-20 07:27:53.049965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.854 qpair failed and we were unable to recover it. 00:25:49.854 [2024-11-20 07:27:53.050218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.854 [2024-11-20 07:27:53.050287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.854 qpair failed and we were unable to recover it. 
00:25:49.854 [2024-11-20 07:27:53.050577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.854 [2024-11-20 07:27:53.050611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.854 qpair failed and we were unable to recover it. 00:25:49.854 [2024-11-20 07:27:53.050756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.854 [2024-11-20 07:27:53.050790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.854 qpair failed and we were unable to recover it. 00:25:49.854 [2024-11-20 07:27:53.051032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.854 [2024-11-20 07:27:53.051100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.854 qpair failed and we were unable to recover it. 00:25:49.854 [2024-11-20 07:27:53.051381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.854 [2024-11-20 07:27:53.051415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2614485 Killed "${NVMF_APP[@]}" "$@" 00:25:49.854 qpair failed and we were unable to recover it. 00:25:49.854 [2024-11-20 07:27:53.051571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.854 [2024-11-20 07:27:53.051605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.854 qpair failed and we were unable to recover it. 00:25:49.854 [2024-11-20 07:27:53.051800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.854 [2024-11-20 07:27:53.051866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.854 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:25:49.854 qpair failed and we were unable to recover it. 00:25:49.854 [2024-11-20 07:27:53.052103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.854 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:49.854 [2024-11-20 07:27:53.052138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.854 qpair failed and we were unable to recover it. 00:25:49.854 [2024-11-20 07:27:53.052242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.854 [2024-11-20 07:27:53.052278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.855 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:49.855 qpair failed and we were unable to recover it. 
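At this point the harness has killed the running nvmf target (the `Killed "${NVMF_APP[@]}"` message from target_disconnect.sh line 36 above), so every reconnect attempt from the initiator fails with errno = 111, which on Linux is ECONNREFUSED: nothing is listening on 10.0.0.2:4420 until disconnect_init / nvmfappstart bring a new target up. A minimal sketch of probing that state outside the harness (an assumption for illustration only, not part of the SPDK test scripts; it reuses the namespace and address that appear in this log):

    # Probe the NVMe/TCP listen address from the target's network namespace.
    # While no nvmf_tgt is listening, connect() fails with ECONNREFUSED (111),
    # which is exactly the posix_sock_create error repeated above.
    until ip netns exec cvl_0_0_ns_spdk nc -z 10.0.0.2 4420; do
        echo "10.0.0.2:4420 still refusing connections (errno 111)"
        sleep 0.5
    done
    echo "listener is back"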
00:25:49.855 [2024-11-20 07:27:53.052433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.855 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:49.855 [2024-11-20 07:27:53.052469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.855 qpair failed and we were unable to recover it. 00:25:49.855 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:49.855 [2024-11-20 07:27:53.052630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.855 [2024-11-20 07:27:53.052716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.855 qpair failed and we were unable to recover it. 00:25:49.855 [2024-11-20 07:27:53.052972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.855 [2024-11-20 07:27:53.053039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.855 qpair failed and we were unable to recover it. 00:25:49.855 [2024-11-20 07:27:53.053263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.855 [2024-11-20 07:27:53.053352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.855 qpair failed and we were unable to recover it. 00:25:49.855 [2024-11-20 07:27:53.053603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.855 [2024-11-20 07:27:53.053668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.855 qpair failed and we were unable to recover it. 00:25:49.855 [2024-11-20 07:27:53.053955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.855 [2024-11-20 07:27:53.054021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.855 qpair failed and we were unable to recover it. 00:25:49.855 [2024-11-20 07:27:53.054267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.855 [2024-11-20 07:27:53.054352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.855 qpair failed and we were unable to recover it. 00:25:49.855 [2024-11-20 07:27:53.054626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.855 [2024-11-20 07:27:53.054691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.855 qpair failed and we were unable to recover it. 00:25:49.855 [2024-11-20 07:27:53.054968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.855 [2024-11-20 07:27:53.055034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.855 qpair failed and we were unable to recover it. 
00:25:49.855 [2024-11-20 07:27:53.055281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.855 [2024-11-20 07:27:53.055368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.855 qpair failed and we were unable to recover it. 00:25:49.855 [2024-11-20 07:27:53.055628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.855 [2024-11-20 07:27:53.055693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.855 qpair failed and we were unable to recover it. 00:25:49.855 [2024-11-20 07:27:53.055979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.855 [2024-11-20 07:27:53.056044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.855 qpair failed and we were unable to recover it. 00:25:49.855 [2024-11-20 07:27:53.056286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.855 [2024-11-20 07:27:53.056378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.855 qpair failed and we were unable to recover it. 00:25:49.855 [2024-11-20 07:27:53.056604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.855 [2024-11-20 07:27:53.056638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.855 qpair failed and we were unable to recover it. 00:25:49.855 [2024-11-20 07:27:53.056784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.855 [2024-11-20 07:27:53.056819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.855 qpair failed and we were unable to recover it. 00:25:49.855 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2614922 00:25:49.855 [2024-11-20 07:27:53.056929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.855 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:49.855 [2024-11-20 07:27:53.056964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.855 qpair failed and we were unable to recover it. 00:25:49.855 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2614922 00:25:49.855 [2024-11-20 07:27:53.057223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.855 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 2614922 ']' 00:25:49.855 [2024-11-20 07:27:53.057291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.855 qpair failed and we were unable to recover it. 
00:25:49.855 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.855 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:49.855 [2024-11-20 07:27:53.057628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.855 [2024-11-20 07:27:53.057696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.855 qpair failed and we were unable to recover it. 00:25:49.855 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:49.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:49.855 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:49.855 [2024-11-20 07:27:53.057938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.855 [2024-11-20 07:27:53.058004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.855 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:49.855 qpair failed and we were unable to recover it. 00:25:49.855 [2024-11-20 07:27:53.058260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.855 [2024-11-20 07:27:53.058347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.855 qpair failed and we were unable to recover it. 00:25:49.855 [2024-11-20 07:27:53.058594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.855 [2024-11-20 07:27:53.058628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.855 qpair failed and we were unable to recover it. 00:25:49.855 [2024-11-20 07:27:53.058773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.855 [2024-11-20 07:27:53.058807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.855 qpair failed and we were unable to recover it. 00:25:49.855 [2024-11-20 07:27:53.058920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.855 [2024-11-20 07:27:53.058955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.855 qpair failed and we were unable to recover it. 00:25:49.855 [2024-11-20 07:27:53.060504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.856 [2024-11-20 07:27:53.060538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.856 qpair failed and we were unable to recover it. 
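The interleaved nvmf/common.sh and autotest_common.sh trace above shows the restart path: nvmfappstart -m 0xF0 launches a fresh nvmf_tgt (pid 2614922) inside the cvl_0_0_ns_spdk namespace, then waitforlisten 2614922 polls until the new process answers on its RPC socket at /var/tmp/spdk.sock, giving up after max_retries=100 attempts. A simplified sketch of that wait loop, assuming SPDK's bundled scripts/rpc.py is on the path (a paraphrase of the traced helper, not the verbatim function):

    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    for ((i = 0; i < max_retries; i++)); do
        # rpc_get_methods only succeeds once nvmf_tgt has created the socket
        # and is ready to serve RPCs
        if rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            echo "nvmf_tgt is up and serving RPCs"
            break
        fi
        sleep 0.5
    done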
00:25:49.856 [2024-11-20 07:27:53.060779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.856 [2024-11-20 07:27:53.060831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.856 qpair failed and we were unable to recover it. 00:25:49.856 [2024-11-20 07:27:53.060953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.856 [2024-11-20 07:27:53.061001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.856 qpair failed and we were unable to recover it. 00:25:49.856 [2024-11-20 07:27:53.061145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.856 [2024-11-20 07:27:53.061173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.856 qpair failed and we were unable to recover it. 00:25:49.856 [2024-11-20 07:27:53.061289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.856 [2024-11-20 07:27:53.061329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.856 qpair failed and we were unable to recover it. 00:25:49.856 [2024-11-20 07:27:53.061451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.856 [2024-11-20 07:27:53.061480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.856 qpair failed and we were unable to recover it. 00:25:49.856 [2024-11-20 07:27:53.061600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.856 [2024-11-20 07:27:53.061628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.856 qpair failed and we were unable to recover it. 00:25:49.856 [2024-11-20 07:27:53.061750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.856 [2024-11-20 07:27:53.061779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.856 qpair failed and we were unable to recover it. 00:25:49.856 [2024-11-20 07:27:53.061907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.856 [2024-11-20 07:27:53.061937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.856 qpair failed and we were unable to recover it. 00:25:49.856 [2024-11-20 07:27:53.062048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.856 [2024-11-20 07:27:53.062075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.856 qpair failed and we were unable to recover it. 00:25:49.856 [2024-11-20 07:27:53.062172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.856 [2024-11-20 07:27:53.062200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.856 qpair failed and we were unable to recover it. 
00:25:49.856 [2024-11-20 07:27:53.062319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.856 [2024-11-20 07:27:53.062349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.856 qpair failed and we were unable to recover it. 00:25:49.856 [2024-11-20 07:27:53.062434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.856 [2024-11-20 07:27:53.062463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.856 qpair failed and we were unable to recover it. 00:25:49.856 [2024-11-20 07:27:53.062587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.856 [2024-11-20 07:27:53.062615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.856 qpair failed and we were unable to recover it. 00:25:49.856 [2024-11-20 07:27:53.062760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.856 [2024-11-20 07:27:53.062793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.856 qpair failed and we were unable to recover it. 00:25:49.856 [2024-11-20 07:27:53.062912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.856 [2024-11-20 07:27:53.062940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.856 qpair failed and we were unable to recover it. 00:25:49.856 [2024-11-20 07:27:53.063053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.856 [2024-11-20 07:27:53.063081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.856 qpair failed and we were unable to recover it. 00:25:49.856 [2024-11-20 07:27:53.063204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.856 [2024-11-20 07:27:53.063232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.856 qpair failed and we were unable to recover it. 00:25:49.856 [2024-11-20 07:27:53.063361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.856 [2024-11-20 07:27:53.063390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.856 qpair failed and we were unable to recover it. 00:25:49.856 [2024-11-20 07:27:53.063506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.856 [2024-11-20 07:27:53.063534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.856 qpair failed and we were unable to recover it. 00:25:49.856 [2024-11-20 07:27:53.063652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.856 [2024-11-20 07:27:53.063681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.856 qpair failed and we were unable to recover it. 
00:25:49.856 [2024-11-20 07:27:53.063794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.856 [2024-11-20 07:27:53.063822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.856 qpair failed and we were unable to recover it. 00:25:49.856 [2024-11-20 07:27:53.063911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.856 [2024-11-20 07:27:53.063939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.856 qpair failed and we were unable to recover it. 00:25:49.856 [2024-11-20 07:27:53.064037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.856 [2024-11-20 07:27:53.064066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.856 qpair failed and we were unable to recover it. 00:25:49.856 [2024-11-20 07:27:53.064196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.856 [2024-11-20 07:27:53.064225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.856 qpair failed and we were unable to recover it. 00:25:49.856 [2024-11-20 07:27:53.064353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.856 [2024-11-20 07:27:53.064382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.856 qpair failed and we were unable to recover it. 00:25:49.856 [2024-11-20 07:27:53.064500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.856 [2024-11-20 07:27:53.064528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.856 qpair failed and we were unable to recover it. 00:25:49.856 [2024-11-20 07:27:53.064647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.856 [2024-11-20 07:27:53.064675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.856 qpair failed and we were unable to recover it. 00:25:49.856 [2024-11-20 07:27:53.064794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.856 [2024-11-20 07:27:53.064822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.856 qpair failed and we were unable to recover it. 00:25:49.856 [2024-11-20 07:27:53.064962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.856 [2024-11-20 07:27:53.064990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.857 qpair failed and we were unable to recover it. 00:25:49.857 [2024-11-20 07:27:53.065114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.857 [2024-11-20 07:27:53.065143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.857 qpair failed and we were unable to recover it. 
00:25:49.857 [2024-11-20 07:27:53.065236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.857 [2024-11-20 07:27:53.065264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.857 qpair failed and we were unable to recover it. 00:25:49.857 [2024-11-20 07:27:53.065427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.857 [2024-11-20 07:27:53.065457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.857 qpair failed and we were unable to recover it. 00:25:49.857 [2024-11-20 07:27:53.065572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.857 [2024-11-20 07:27:53.065601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.857 qpair failed and we were unable to recover it. 00:25:49.857 [2024-11-20 07:27:53.065724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.857 [2024-11-20 07:27:53.065752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.857 qpair failed and we were unable to recover it. 00:25:49.857 [2024-11-20 07:27:53.065876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.857 [2024-11-20 07:27:53.065904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.857 qpair failed and we were unable to recover it. 00:25:49.857 [2024-11-20 07:27:53.065995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.857 [2024-11-20 07:27:53.066024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.857 qpair failed and we were unable to recover it. 00:25:49.857 [2024-11-20 07:27:53.066103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.857 [2024-11-20 07:27:53.066130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.857 qpair failed and we were unable to recover it. 00:25:49.857 [2024-11-20 07:27:53.066244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.857 [2024-11-20 07:27:53.066272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.857 qpair failed and we were unable to recover it. 00:25:49.857 [2024-11-20 07:27:53.066401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.857 [2024-11-20 07:27:53.066429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.857 qpair failed and we were unable to recover it. 00:25:49.857 [2024-11-20 07:27:53.066553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.857 [2024-11-20 07:27:53.066581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.857 qpair failed and we were unable to recover it. 
00:25:49.857 [2024-11-20 07:27:53.066703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.857 [2024-11-20 07:27:53.066737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.857 qpair failed and we were unable to recover it. 00:25:49.857 [2024-11-20 07:27:53.066826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.857 [2024-11-20 07:27:53.066855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.857 qpair failed and we were unable to recover it. 00:25:49.857 [2024-11-20 07:27:53.066958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.857 [2024-11-20 07:27:53.066986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.857 qpair failed and we were unable to recover it. 00:25:49.857 [2024-11-20 07:27:53.067075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.857 [2024-11-20 07:27:53.067105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.857 qpair failed and we were unable to recover it. 00:25:49.857 [2024-11-20 07:27:53.067207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.857 [2024-11-20 07:27:53.067236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.857 qpair failed and we were unable to recover it. 00:25:49.857 [2024-11-20 07:27:53.067368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.857 [2024-11-20 07:27:53.067397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.857 qpair failed and we were unable to recover it. 00:25:49.857 [2024-11-20 07:27:53.067542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.857 [2024-11-20 07:27:53.067571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.857 qpair failed and we were unable to recover it. 00:25:49.857 [2024-11-20 07:27:53.067664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.857 [2024-11-20 07:27:53.067694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.857 qpair failed and we were unable to recover it. 00:25:49.857 [2024-11-20 07:27:53.067810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.857 [2024-11-20 07:27:53.067839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.857 qpair failed and we were unable to recover it. 00:25:49.857 [2024-11-20 07:27:53.067963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.857 [2024-11-20 07:27:53.067991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.857 qpair failed and we were unable to recover it. 
00:25:49.857 [2024-11-20 07:27:53.068111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.857 [2024-11-20 07:27:53.068139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.857 qpair failed and we were unable to recover it. 00:25:49.857 [2024-11-20 07:27:53.068255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.857 [2024-11-20 07:27:53.068282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.857 qpair failed and we were unable to recover it. 00:25:49.857 [2024-11-20 07:27:53.068390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.857 [2024-11-20 07:27:53.068419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.857 qpair failed and we were unable to recover it. 00:25:49.857 [2024-11-20 07:27:53.068512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.857 [2024-11-20 07:27:53.068539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.857 qpair failed and we were unable to recover it. 00:25:49.857 [2024-11-20 07:27:53.068642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.857 [2024-11-20 07:27:53.068670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.857 qpair failed and we were unable to recover it. 00:25:49.857 [2024-11-20 07:27:53.068798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.857 [2024-11-20 07:27:53.068826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.857 qpair failed and we were unable to recover it. 00:25:49.857 [2024-11-20 07:27:53.068922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.857 [2024-11-20 07:27:53.068950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.857 qpair failed and we were unable to recover it. 00:25:49.857 [2024-11-20 07:27:53.069043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.857 [2024-11-20 07:27:53.069072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.857 qpair failed and we were unable to recover it. 00:25:49.857 [2024-11-20 07:27:53.069207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.857 [2024-11-20 07:27:53.069236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.857 qpair failed and we were unable to recover it. 00:25:49.857 [2024-11-20 07:27:53.069337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.857 [2024-11-20 07:27:53.069366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.857 qpair failed and we were unable to recover it. 
00:25:49.857 [2024-11-20 07:27:53.069489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.858 [2024-11-20 07:27:53.069518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.858 qpair failed and we were unable to recover it. 00:25:49.858 [2024-11-20 07:27:53.069608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.858 [2024-11-20 07:27:53.069636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.858 qpair failed and we were unable to recover it. 00:25:49.858 [2024-11-20 07:27:53.069738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.858 [2024-11-20 07:27:53.069766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.858 qpair failed and we were unable to recover it. 00:25:49.858 [2024-11-20 07:27:53.069914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.858 [2024-11-20 07:27:53.069942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.858 qpair failed and we were unable to recover it. 00:25:49.858 [2024-11-20 07:27:53.070060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.858 [2024-11-20 07:27:53.070089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.858 qpair failed and we were unable to recover it. 00:25:49.858 [2024-11-20 07:27:53.070183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.858 [2024-11-20 07:27:53.070211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.858 qpair failed and we were unable to recover it. 00:25:49.858 [2024-11-20 07:27:53.070315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.858 [2024-11-20 07:27:53.070344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.858 qpair failed and we were unable to recover it. 00:25:49.858 [2024-11-20 07:27:53.070461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.858 [2024-11-20 07:27:53.070495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.858 qpair failed and we were unable to recover it. 00:25:49.858 [2024-11-20 07:27:53.070592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.858 [2024-11-20 07:27:53.070620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.858 qpair failed and we were unable to recover it. 00:25:49.858 [2024-11-20 07:27:53.070711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.858 [2024-11-20 07:27:53.070740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.858 qpair failed and we were unable to recover it. 
00:25:49.858 [2024-11-20 07:27:53.070823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.858 [2024-11-20 07:27:53.070851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.858 qpair failed and we were unable to recover it. 00:25:49.858 [2024-11-20 07:27:53.070942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.858 [2024-11-20 07:27:53.070970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.858 qpair failed and we were unable to recover it. 00:25:49.858 [2024-11-20 07:27:53.071088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.858 [2024-11-20 07:27:53.071116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.858 qpair failed and we were unable to recover it. 00:25:49.858 [2024-11-20 07:27:53.071203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.858 [2024-11-20 07:27:53.071233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.858 qpair failed and we were unable to recover it. 00:25:49.858 [2024-11-20 07:27:53.071350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.858 [2024-11-20 07:27:53.071379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.858 qpair failed and we were unable to recover it. 00:25:49.858 [2024-11-20 07:27:53.071508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.858 [2024-11-20 07:27:53.071536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.858 qpair failed and we were unable to recover it. 00:25:49.858 [2024-11-20 07:27:53.071658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.858 [2024-11-20 07:27:53.071686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.858 qpair failed and we were unable to recover it. 00:25:49.858 [2024-11-20 07:27:53.071778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.858 [2024-11-20 07:27:53.071806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.858 qpair failed and we were unable to recover it. 00:25:49.858 [2024-11-20 07:27:53.071933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.858 [2024-11-20 07:27:53.071963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.858 qpair failed and we were unable to recover it. 00:25:49.858 [2024-11-20 07:27:53.072107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.858 [2024-11-20 07:27:53.072135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.858 qpair failed and we were unable to recover it. 
00:25:49.858 [2024-11-20 07:27:53.072255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.858 [2024-11-20 07:27:53.072283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.858 qpair failed and we were unable to recover it. 00:25:49.858 [2024-11-20 07:27:53.072389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.858 [2024-11-20 07:27:53.072417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.858 qpair failed and we were unable to recover it. 00:25:49.858 [2024-11-20 07:27:53.072504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.858 [2024-11-20 07:27:53.072532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.858 qpair failed and we were unable to recover it. 00:25:49.858 [2024-11-20 07:27:53.072651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.858 [2024-11-20 07:27:53.072679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.858 qpair failed and we were unable to recover it. 00:25:49.858 [2024-11-20 07:27:53.072804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.858 [2024-11-20 07:27:53.072832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.858 qpair failed and we were unable to recover it. 00:25:49.858 [2024-11-20 07:27:53.072919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.858 [2024-11-20 07:27:53.072947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.858 qpair failed and we were unable to recover it. 00:25:49.858 [2024-11-20 07:27:53.073099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.858 [2024-11-20 07:27:53.073127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.858 qpair failed and we were unable to recover it. 00:25:49.858 [2024-11-20 07:27:53.073251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.858 [2024-11-20 07:27:53.073279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.858 qpair failed and we were unable to recover it. 00:25:49.858 [2024-11-20 07:27:53.073437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.858 [2024-11-20 07:27:53.073480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.858 qpair failed and we were unable to recover it. 00:25:49.858 [2024-11-20 07:27:53.073585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.858 [2024-11-20 07:27:53.073616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.858 qpair failed and we were unable to recover it. 
00:25:49.858 [2024-11-20 07:27:53.073735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.858 [2024-11-20 07:27:53.073781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.858 qpair failed and we were unable to recover it. 00:25:49.858 [2024-11-20 07:27:53.073996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.858 [2024-11-20 07:27:53.074026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.858 qpair failed and we were unable to recover it. 00:25:49.858 [2024-11-20 07:27:53.074126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.859 [2024-11-20 07:27:53.074156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.859 qpair failed and we were unable to recover it. 00:25:49.859 [2024-11-20 07:27:53.074247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.859 [2024-11-20 07:27:53.074276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.859 qpair failed and we were unable to recover it. 00:25:49.859 [2024-11-20 07:27:53.074377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.859 [2024-11-20 07:27:53.074412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.859 qpair failed and we were unable to recover it. 00:25:49.859 [2024-11-20 07:27:53.074514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.859 [2024-11-20 07:27:53.074542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.859 qpair failed and we were unable to recover it. 00:25:49.859 [2024-11-20 07:27:53.074656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.859 [2024-11-20 07:27:53.074685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.859 qpair failed and we were unable to recover it. 00:25:49.859 [2024-11-20 07:27:53.074800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.859 [2024-11-20 07:27:53.074828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.859 qpair failed and we were unable to recover it. 00:25:49.859 [2024-11-20 07:27:53.074946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.859 [2024-11-20 07:27:53.074974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.859 qpair failed and we were unable to recover it. 00:25:49.859 [2024-11-20 07:27:53.075091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.859 [2024-11-20 07:27:53.075119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.859 qpair failed and we were unable to recover it. 
00:25:49.859 [2024-11-20 07:27:53.075219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.859 [2024-11-20 07:27:53.075248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.859 qpair failed and we were unable to recover it. 00:25:49.859 [2024-11-20 07:27:53.075370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.859 [2024-11-20 07:27:53.075414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.859 qpair failed and we were unable to recover it. 00:25:49.859 [2024-11-20 07:27:53.075519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.859 [2024-11-20 07:27:53.075550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.859 qpair failed and we were unable to recover it. 00:25:49.859 [2024-11-20 07:27:53.075642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.859 [2024-11-20 07:27:53.075671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.859 qpair failed and we were unable to recover it. 00:25:49.859 [2024-11-20 07:27:53.075795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.859 [2024-11-20 07:27:53.075824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.859 qpair failed and we were unable to recover it. 00:25:49.859 [2024-11-20 07:27:53.075916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.859 [2024-11-20 07:27:53.075945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.859 qpair failed and we were unable to recover it. 00:25:49.859 [2024-11-20 07:27:53.076034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.859 [2024-11-20 07:27:53.076063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.859 qpair failed and we were unable to recover it. 00:25:49.859 [2024-11-20 07:27:53.076145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.859 [2024-11-20 07:27:53.076175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.859 qpair failed and we were unable to recover it. 00:25:49.859 [2024-11-20 07:27:53.076274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.859 [2024-11-20 07:27:53.076309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.859 qpair failed and we were unable to recover it. 00:25:49.859 [2024-11-20 07:27:53.076407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.859 [2024-11-20 07:27:53.076435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.859 qpair failed and we were unable to recover it. 
00:25:49.859 [2024-11-20 07:27:53.076556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.859 [2024-11-20 07:27:53.076584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.859 qpair failed and we were unable to recover it. 00:25:49.859 [2024-11-20 07:27:53.076679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.859 [2024-11-20 07:27:53.076707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.859 qpair failed and we were unable to recover it. 00:25:49.859 [2024-11-20 07:27:53.076825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.859 [2024-11-20 07:27:53.076853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.859 qpair failed and we were unable to recover it. 00:25:49.859 [2024-11-20 07:27:53.076937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.859 [2024-11-20 07:27:53.076965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.859 qpair failed and we were unable to recover it. 00:25:49.859 [2024-11-20 07:27:53.077117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.859 [2024-11-20 07:27:53.077145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.859 qpair failed and we were unable to recover it. 00:25:49.859 [2024-11-20 07:27:53.077260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.859 [2024-11-20 07:27:53.077288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.859 qpair failed and we were unable to recover it. 00:25:49.859 [2024-11-20 07:27:53.077419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.859 [2024-11-20 07:27:53.077447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.859 qpair failed and we were unable to recover it. 00:25:49.859 [2024-11-20 07:27:53.077546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.859 [2024-11-20 07:27:53.077574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.859 qpair failed and we were unable to recover it. 00:25:49.859 [2024-11-20 07:27:53.077672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.859 [2024-11-20 07:27:53.077700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.859 qpair failed and we were unable to recover it. 00:25:49.859 [2024-11-20 07:27:53.077815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.859 [2024-11-20 07:27:53.077842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.859 qpair failed and we were unable to recover it. 
00:25:49.860 [2024-11-20 07:27:53.077966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.860 [2024-11-20 07:27:53.077994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.860 qpair failed and we were unable to recover it. 00:25:49.860 [2024-11-20 07:27:53.078082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.860 [2024-11-20 07:27:53.078110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.860 qpair failed and we were unable to recover it. 00:25:49.860 [2024-11-20 07:27:53.078220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.860 [2024-11-20 07:27:53.078263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.860 qpair failed and we were unable to recover it. 00:25:49.860 [2024-11-20 07:27:53.078374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.860 [2024-11-20 07:27:53.078404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.860 qpair failed and we were unable to recover it. 00:25:49.860 [2024-11-20 07:27:53.078502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.860 [2024-11-20 07:27:53.078531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.860 qpair failed and we were unable to recover it. 00:25:49.860 [2024-11-20 07:27:53.078621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.860 [2024-11-20 07:27:53.078651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.860 qpair failed and we were unable to recover it. 00:25:49.860 [2024-11-20 07:27:53.078772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.860 [2024-11-20 07:27:53.078801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.860 qpair failed and we were unable to recover it. 00:25:49.860 [2024-11-20 07:27:53.078920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.860 [2024-11-20 07:27:53.078949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.860 qpair failed and we were unable to recover it. 00:25:49.860 [2024-11-20 07:27:53.079073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.860 [2024-11-20 07:27:53.079102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.860 qpair failed and we were unable to recover it. 00:25:49.860 [2024-11-20 07:27:53.079232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.860 [2024-11-20 07:27:53.079260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.860 qpair failed and we were unable to recover it. 
00:25:49.860 [2024-11-20 07:27:53.079378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.860 [2024-11-20 07:27:53.079407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.860 qpair failed and we were unable to recover it. 00:25:49.860 [2024-11-20 07:27:53.079545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.860 [2024-11-20 07:27:53.079573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.860 qpair failed and we were unable to recover it. 00:25:49.860 [2024-11-20 07:27:53.079686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.860 [2024-11-20 07:27:53.079714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.860 qpair failed and we were unable to recover it. 00:25:49.860 [2024-11-20 07:27:53.079836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.860 [2024-11-20 07:27:53.079865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.860 qpair failed and we were unable to recover it. 00:25:49.860 [2024-11-20 07:27:53.079985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.860 [2024-11-20 07:27:53.080012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.860 qpair failed and we were unable to recover it. 00:25:49.860 [2024-11-20 07:27:53.080109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.860 [2024-11-20 07:27:53.080137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.860 qpair failed and we were unable to recover it. 00:25:49.860 [2024-11-20 07:27:53.080240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.860 [2024-11-20 07:27:53.080268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.860 qpair failed and we were unable to recover it. 00:25:49.860 [2024-11-20 07:27:53.080375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.860 [2024-11-20 07:27:53.080404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.860 qpair failed and we were unable to recover it. 00:25:49.860 [2024-11-20 07:27:53.080508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.860 [2024-11-20 07:27:53.080536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.860 qpair failed and we were unable to recover it. 00:25:49.860 [2024-11-20 07:27:53.080689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.860 [2024-11-20 07:27:53.080717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.860 qpair failed and we were unable to recover it. 
00:25:49.860 [2024-11-20 07:27:53.080809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.860 [2024-11-20 07:27:53.080836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.860 qpair failed and we were unable to recover it. 00:25:49.860 [2024-11-20 07:27:53.080933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.860 [2024-11-20 07:27:53.080963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.860 qpair failed and we were unable to recover it. 00:25:49.860 [2024-11-20 07:27:53.081071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.860 [2024-11-20 07:27:53.081100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.860 qpair failed and we were unable to recover it. 00:25:49.860 [2024-11-20 07:27:53.081245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.860 [2024-11-20 07:27:53.081273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.860 qpair failed and we were unable to recover it. 00:25:49.860 [2024-11-20 07:27:53.081379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.860 [2024-11-20 07:27:53.081408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.860 qpair failed and we were unable to recover it. 00:25:49.860 [2024-11-20 07:27:53.081498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.860 [2024-11-20 07:27:53.081526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.860 qpair failed and we were unable to recover it. 00:25:49.860 [2024-11-20 07:27:53.081622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.860 [2024-11-20 07:27:53.081650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.860 qpair failed and we were unable to recover it. 00:25:49.860 [2024-11-20 07:27:53.081749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.860 [2024-11-20 07:27:53.081777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.860 qpair failed and we were unable to recover it. 00:25:49.860 [2024-11-20 07:27:53.081893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.860 [2024-11-20 07:27:53.081921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.860 qpair failed and we were unable to recover it. 00:25:49.861 [2024-11-20 07:27:53.082020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.861 [2024-11-20 07:27:53.082049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.861 qpair failed and we were unable to recover it. 
00:25:49.861 [2024-11-20 07:27:53.082161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.861 [2024-11-20 07:27:53.082190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.861 qpair failed and we were unable to recover it. 00:25:49.861 [2024-11-20 07:27:53.082967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.861 [2024-11-20 07:27:53.082998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.861 qpair failed and we were unable to recover it. 00:25:49.861 [2024-11-20 07:27:53.083140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.861 [2024-11-20 07:27:53.083166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.861 qpair failed and we were unable to recover it. 00:25:49.861 [2024-11-20 07:27:53.083287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.861 [2024-11-20 07:27:53.083332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.861 qpair failed and we were unable to recover it. 00:25:49.861 [2024-11-20 07:27:53.083428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.861 [2024-11-20 07:27:53.083454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.861 qpair failed and we were unable to recover it. 00:25:49.861 [2024-11-20 07:27:53.083537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.861 [2024-11-20 07:27:53.083563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.861 qpair failed and we were unable to recover it. 00:25:49.861 [2024-11-20 07:27:53.083690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.861 [2024-11-20 07:27:53.083716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.861 qpair failed and we were unable to recover it. 00:25:49.861 [2024-11-20 07:27:53.083843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.861 [2024-11-20 07:27:53.083869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.861 qpair failed and we were unable to recover it. 00:25:49.861 [2024-11-20 07:27:53.083985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.861 [2024-11-20 07:27:53.084012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.861 qpair failed and we were unable to recover it. 00:25:49.861 [2024-11-20 07:27:53.084122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.861 [2024-11-20 07:27:53.084151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.861 qpair failed and we were unable to recover it. 
00:25:49.861 [2024-11-20 07:27:53.084231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.861 [2024-11-20 07:27:53.084257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.861 qpair failed and we were unable to recover it. 00:25:49.861 [2024-11-20 07:27:53.084358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.861 [2024-11-20 07:27:53.084385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.861 qpair failed and we were unable to recover it. 00:25:49.861 [2024-11-20 07:27:53.084502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.861 [2024-11-20 07:27:53.084542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.861 qpair failed and we were unable to recover it. 00:25:49.861 [2024-11-20 07:27:53.084674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.861 [2024-11-20 07:27:53.084701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.861 qpair failed and we were unable to recover it. 00:25:49.861 [2024-11-20 07:27:53.084819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.861 [2024-11-20 07:27:53.084846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.861 qpair failed and we were unable to recover it. 00:25:49.861 [2024-11-20 07:27:53.084972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.861 [2024-11-20 07:27:53.084999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.861 qpair failed and we were unable to recover it. 00:25:49.861 [2024-11-20 07:27:53.085132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.861 [2024-11-20 07:27:53.085171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.861 qpair failed and we were unable to recover it. 00:25:49.861 [2024-11-20 07:27:53.085272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.861 [2024-11-20 07:27:53.085322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.861 qpair failed and we were unable to recover it. 00:25:49.861 [2024-11-20 07:27:53.085419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.861 [2024-11-20 07:27:53.085447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.861 qpair failed and we were unable to recover it. 00:25:49.861 [2024-11-20 07:27:53.085539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.861 [2024-11-20 07:27:53.085565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.861 qpair failed and we were unable to recover it. 
00:25:49.861 [2024-11-20 07:27:53.085656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.861 [2024-11-20 07:27:53.085682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.861 qpair failed and we were unable to recover it. 00:25:49.861 [2024-11-20 07:27:53.085771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.861 [2024-11-20 07:27:53.085797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.861 qpair failed and we were unable to recover it. 00:25:49.861 [2024-11-20 07:27:53.085915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.861 [2024-11-20 07:27:53.085945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.861 qpair failed and we were unable to recover it. 00:25:49.861 [2024-11-20 07:27:53.086030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.861 [2024-11-20 07:27:53.086057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.861 qpair failed and we were unable to recover it. 00:25:49.861 [2024-11-20 07:27:53.086169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.861 [2024-11-20 07:27:53.086195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.861 qpair failed and we were unable to recover it. 00:25:49.861 [2024-11-20 07:27:53.086283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.861 [2024-11-20 07:27:53.086323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.861 qpair failed and we were unable to recover it. 00:25:49.861 [2024-11-20 07:27:53.086426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.861 [2024-11-20 07:27:53.086457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.861 qpair failed and we were unable to recover it. 00:25:49.861 [2024-11-20 07:27:53.086569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.861 [2024-11-20 07:27:53.086596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.861 qpair failed and we were unable to recover it. 00:25:49.861 [2024-11-20 07:27:53.086687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.861 [2024-11-20 07:27:53.086714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.861 qpair failed and we were unable to recover it. 00:25:49.862 [2024-11-20 07:27:53.086817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.862 [2024-11-20 07:27:53.086844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.862 qpair failed and we were unable to recover it. 
00:25:49.862 [2024-11-20 07:27:53.086938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.862 [2024-11-20 07:27:53.086965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.862 qpair failed and we were unable to recover it. 00:25:49.862 [2024-11-20 07:27:53.087080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.862 [2024-11-20 07:27:53.087106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.862 qpair failed and we were unable to recover it. 00:25:49.862 [2024-11-20 07:27:53.087219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.862 [2024-11-20 07:27:53.087245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.862 qpair failed and we were unable to recover it. 00:25:49.862 [2024-11-20 07:27:53.087357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.862 [2024-11-20 07:27:53.087384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.862 qpair failed and we were unable to recover it. 00:25:49.862 [2024-11-20 07:27:53.087472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.862 [2024-11-20 07:27:53.087501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.862 qpair failed and we were unable to recover it. 00:25:49.862 [2024-11-20 07:27:53.087626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.862 [2024-11-20 07:27:53.087653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.862 qpair failed and we were unable to recover it. 00:25:49.862 [2024-11-20 07:27:53.087857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.862 [2024-11-20 07:27:53.087883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.862 qpair failed and we were unable to recover it. 00:25:49.862 [2024-11-20 07:27:53.088022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.862 [2024-11-20 07:27:53.088048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.862 qpair failed and we were unable to recover it. 00:25:49.862 [2024-11-20 07:27:53.088166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.862 [2024-11-20 07:27:53.088193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.862 qpair failed and we were unable to recover it. 00:25:49.862 [2024-11-20 07:27:53.088339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.862 [2024-11-20 07:27:53.088366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.862 qpair failed and we were unable to recover it. 
00:25:49.862 [2024-11-20 07:27:53.088460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.862 [2024-11-20 07:27:53.088487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.862 qpair failed and we were unable to recover it. 00:25:49.862 [2024-11-20 07:27:53.088577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.862 [2024-11-20 07:27:53.088603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.862 qpair failed and we were unable to recover it. 00:25:49.862 [2024-11-20 07:27:53.088699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.862 [2024-11-20 07:27:53.088725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.862 qpair failed and we were unable to recover it. 00:25:49.862 [2024-11-20 07:27:53.088813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.862 [2024-11-20 07:27:53.088839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.862 qpair failed and we were unable to recover it. 00:25:49.862 [2024-11-20 07:27:53.088922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.862 [2024-11-20 07:27:53.088948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.862 qpair failed and we were unable to recover it. 00:25:49.862 [2024-11-20 07:27:53.089035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.862 [2024-11-20 07:27:53.089062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.862 qpair failed and we were unable to recover it. 00:25:49.862 [2024-11-20 07:27:53.089136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.862 [2024-11-20 07:27:53.089162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.862 qpair failed and we were unable to recover it. 00:25:49.862 [2024-11-20 07:27:53.089246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.862 [2024-11-20 07:27:53.089274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.862 qpair failed and we were unable to recover it. 00:25:49.862 [2024-11-20 07:27:53.089422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.862 [2024-11-20 07:27:53.089448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.862 qpair failed and we were unable to recover it. 00:25:49.862 [2024-11-20 07:27:53.089531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.862 [2024-11-20 07:27:53.089558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.862 qpair failed and we were unable to recover it. 
00:25:49.862 [2024-11-20 07:27:53.089651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.862 [2024-11-20 07:27:53.089677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.862 qpair failed and we were unable to recover it. 00:25:49.862 [2024-11-20 07:27:53.089810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.862 [2024-11-20 07:27:53.089837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.862 qpair failed and we were unable to recover it. 00:25:49.862 [2024-11-20 07:27:53.089920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.862 [2024-11-20 07:27:53.089951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.862 qpair failed and we were unable to recover it. 00:25:49.862 [2024-11-20 07:27:53.090045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.862 [2024-11-20 07:27:53.090072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.862 qpair failed and we were unable to recover it. 00:25:49.862 [2024-11-20 07:27:53.090168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.862 [2024-11-20 07:27:53.090195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.862 qpair failed and we were unable to recover it. 00:25:49.862 [2024-11-20 07:27:53.090286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.862 [2024-11-20 07:27:53.090323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.862 qpair failed and we were unable to recover it. 00:25:49.862 [2024-11-20 07:27:53.090413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.862 [2024-11-20 07:27:53.090439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.862 qpair failed and we were unable to recover it. 00:25:49.862 [2024-11-20 07:27:53.090537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.862 [2024-11-20 07:27:53.090563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.862 qpair failed and we were unable to recover it. 00:25:49.862 [2024-11-20 07:27:53.090657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.862 [2024-11-20 07:27:53.090684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.862 qpair failed and we were unable to recover it. 00:25:49.862 [2024-11-20 07:27:53.090800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.862 [2024-11-20 07:27:53.090826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.862 qpair failed and we were unable to recover it. 
00:25:49.863 [2024-11-20 07:27:53.090914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.863 [2024-11-20 07:27:53.090941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.863 qpair failed and we were unable to recover it. 00:25:49.863 [2024-11-20 07:27:53.091031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.863 [2024-11-20 07:27:53.091057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.863 qpair failed and we were unable to recover it. 00:25:49.863 [2024-11-20 07:27:53.091183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.863 [2024-11-20 07:27:53.091222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.863 qpair failed and we were unable to recover it. 00:25:49.863 [2024-11-20 07:27:53.091350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.863 [2024-11-20 07:27:53.091390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.863 qpair failed and we were unable to recover it. 00:25:49.863 [2024-11-20 07:27:53.091489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.863 [2024-11-20 07:27:53.091517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.863 qpair failed and we were unable to recover it. 00:25:49.863 [2024-11-20 07:27:53.091645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.863 [2024-11-20 07:27:53.091672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.863 qpair failed and we were unable to recover it. 00:25:49.863 [2024-11-20 07:27:53.091758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.863 [2024-11-20 07:27:53.091785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.863 qpair failed and we were unable to recover it. 00:25:49.863 [2024-11-20 07:27:53.091873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.863 [2024-11-20 07:27:53.091900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.863 qpair failed and we were unable to recover it. 00:25:49.863 [2024-11-20 07:27:53.092013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.863 [2024-11-20 07:27:53.092040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.863 qpair failed and we were unable to recover it. 00:25:49.863 [2024-11-20 07:27:53.092159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.863 [2024-11-20 07:27:53.092186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.863 qpair failed and we were unable to recover it. 
00:25:49.863 [2024-11-20 07:27:53.092316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.863 [2024-11-20 07:27:53.092345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.863 qpair failed and we were unable to recover it. 00:25:49.863 [2024-11-20 07:27:53.092428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.863 [2024-11-20 07:27:53.092455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.863 qpair failed and we were unable to recover it. 00:25:49.863 [2024-11-20 07:27:53.092550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.863 [2024-11-20 07:27:53.092577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.863 qpair failed and we were unable to recover it. 00:25:49.863 [2024-11-20 07:27:53.092682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.863 [2024-11-20 07:27:53.092710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.863 qpair failed and we were unable to recover it. 00:25:49.863 [2024-11-20 07:27:53.092803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.863 [2024-11-20 07:27:53.092831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.863 qpair failed and we were unable to recover it. 00:25:49.863 [2024-11-20 07:27:53.092951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.863 [2024-11-20 07:27:53.092991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.863 qpair failed and we were unable to recover it. 00:25:49.863 [2024-11-20 07:27:53.093120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.863 [2024-11-20 07:27:53.093160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.863 qpair failed and we were unable to recover it. 00:25:49.863 [2024-11-20 07:27:53.093276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.863 [2024-11-20 07:27:53.093322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.863 qpair failed and we were unable to recover it. 00:25:49.863 [2024-11-20 07:27:53.093420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.863 [2024-11-20 07:27:53.093448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.863 qpair failed and we were unable to recover it. 00:25:49.863 [2024-11-20 07:27:53.093542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.863 [2024-11-20 07:27:53.093569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.863 qpair failed and we were unable to recover it. 
00:25:49.863 [2024-11-20 07:27:53.093685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.863 [2024-11-20 07:27:53.093711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.863 qpair failed and we were unable to recover it. 00:25:49.863 [2024-11-20 07:27:53.093801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.863 [2024-11-20 07:27:53.093829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.863 qpair failed and we were unable to recover it. 00:25:49.863 [2024-11-20 07:27:53.093942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.863 [2024-11-20 07:27:53.093968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.863 qpair failed and we were unable to recover it. 00:25:49.863 [2024-11-20 07:27:53.094103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.863 [2024-11-20 07:27:53.094131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.863 qpair failed and we were unable to recover it. 00:25:49.863 [2024-11-20 07:27:53.094213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.863 [2024-11-20 07:27:53.094240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.863 qpair failed and we were unable to recover it. 00:25:49.863 [2024-11-20 07:27:53.094346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.863 [2024-11-20 07:27:53.094374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.863 qpair failed and we were unable to recover it. 00:25:49.863 [2024-11-20 07:27:53.094463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.863 [2024-11-20 07:27:53.094490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.863 qpair failed and we were unable to recover it. 00:25:49.863 [2024-11-20 07:27:53.094573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.863 [2024-11-20 07:27:53.094599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.863 qpair failed and we were unable to recover it. 00:25:49.863 [2024-11-20 07:27:53.094693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.863 [2024-11-20 07:27:53.094727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.863 qpair failed and we were unable to recover it. 00:25:49.863 [2024-11-20 07:27:53.094841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.863 [2024-11-20 07:27:53.094867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.864 qpair failed and we were unable to recover it. 
00:25:49.864 [2024-11-20 07:27:53.094994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.864 [2024-11-20 07:27:53.095020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.864 qpair failed and we were unable to recover it. 00:25:49.864 [2024-11-20 07:27:53.095106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.864 [2024-11-20 07:27:53.095132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.864 qpair failed and we were unable to recover it. 00:25:49.864 [2024-11-20 07:27:53.095276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.864 [2024-11-20 07:27:53.095322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.864 qpair failed and we were unable to recover it. 00:25:49.864 [2024-11-20 07:27:53.095413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.864 [2024-11-20 07:27:53.095440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.864 qpair failed and we were unable to recover it. 00:25:49.864 [2024-11-20 07:27:53.095521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.864 [2024-11-20 07:27:53.095547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.864 qpair failed and we were unable to recover it. 00:25:49.864 [2024-11-20 07:27:53.095638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.864 [2024-11-20 07:27:53.095666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.864 qpair failed and we were unable to recover it. 00:25:49.864 [2024-11-20 07:27:53.095776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.864 [2024-11-20 07:27:53.095803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.864 qpair failed and we were unable to recover it. 00:25:49.864 [2024-11-20 07:27:53.095917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.864 [2024-11-20 07:27:53.095944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.864 qpair failed and we were unable to recover it. 00:25:49.864 [2024-11-20 07:27:53.096039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.864 [2024-11-20 07:27:53.096068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.864 qpair failed and we were unable to recover it. 00:25:49.864 [2024-11-20 07:27:53.096155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.864 [2024-11-20 07:27:53.096181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.864 qpair failed and we were unable to recover it. 
00:25:49.864 [2024-11-20 07:27:53.096307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.864 [2024-11-20 07:27:53.096335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.864 qpair failed and we were unable to recover it. 00:25:49.864 [2024-11-20 07:27:53.096416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.864 [2024-11-20 07:27:53.096443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.864 qpair failed and we were unable to recover it. 00:25:49.864 [2024-11-20 07:27:53.096528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.864 [2024-11-20 07:27:53.096554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.864 qpair failed and we were unable to recover it. 00:25:49.864 [2024-11-20 07:27:53.096669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.864 [2024-11-20 07:27:53.096696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.864 qpair failed and we were unable to recover it. 00:25:49.864 [2024-11-20 07:27:53.096810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.864 [2024-11-20 07:27:53.096838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.864 qpair failed and we were unable to recover it. 00:25:49.864 [2024-11-20 07:27:53.096934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.864 [2024-11-20 07:27:53.096961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.864 qpair failed and we were unable to recover it. 00:25:49.864 [2024-11-20 07:27:53.097068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.864 [2024-11-20 07:27:53.097107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.864 qpair failed and we were unable to recover it. 00:25:49.864 [2024-11-20 07:27:53.097255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.864 [2024-11-20 07:27:53.097283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.864 qpair failed and we were unable to recover it. 00:25:49.864 [2024-11-20 07:27:53.097372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.864 [2024-11-20 07:27:53.097400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.864 qpair failed and we were unable to recover it. 00:25:49.864 [2024-11-20 07:27:53.097495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.864 [2024-11-20 07:27:53.097522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.864 qpair failed and we were unable to recover it. 
00:25:49.864 [2024-11-20 07:27:53.097638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.864 [2024-11-20 07:27:53.097666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.864 qpair failed and we were unable to recover it. 00:25:49.864 [2024-11-20 07:27:53.097783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.864 [2024-11-20 07:27:53.097811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.864 qpair failed and we were unable to recover it. 00:25:49.864 [2024-11-20 07:27:53.097894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.864 [2024-11-20 07:27:53.097921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.864 qpair failed and we were unable to recover it. 00:25:49.864 [2024-11-20 07:27:53.098037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.864 [2024-11-20 07:27:53.098065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.864 qpair failed and we were unable to recover it. 00:25:49.864 [2024-11-20 07:27:53.098178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.864 [2024-11-20 07:27:53.098218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.864 qpair failed and we were unable to recover it. 00:25:49.864 [2024-11-20 07:27:53.098319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.864 [2024-11-20 07:27:53.098349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.864 qpair failed and we were unable to recover it. 00:25:49.864 [2024-11-20 07:27:53.098441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.864 [2024-11-20 07:27:53.098467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.864 qpair failed and we were unable to recover it. 00:25:49.864 [2024-11-20 07:27:53.098550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.864 [2024-11-20 07:27:53.098577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.864 qpair failed and we were unable to recover it. 00:25:49.864 [2024-11-20 07:27:53.098691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.864 [2024-11-20 07:27:53.098718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.864 qpair failed and we were unable to recover it. 00:25:49.864 [2024-11-20 07:27:53.098821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.864 [2024-11-20 07:27:53.098848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.864 qpair failed and we were unable to recover it. 
00:25:49.864 [2024-11-20 07:27:53.098995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.864 [2024-11-20 07:27:53.099022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.864 qpair failed and we were unable to recover it. 00:25:49.864 [2024-11-20 07:27:53.099114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.864 [2024-11-20 07:27:53.099142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.864 qpair failed and we were unable to recover it. 00:25:49.864 [2024-11-20 07:27:53.099283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.864 [2024-11-20 07:27:53.099325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.864 qpair failed and we were unable to recover it. 00:25:49.864 [2024-11-20 07:27:53.099411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.865 [2024-11-20 07:27:53.099438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.865 qpair failed and we were unable to recover it. 00:25:49.865 [2024-11-20 07:27:53.099528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.865 [2024-11-20 07:27:53.099555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.865 qpair failed and we were unable to recover it. 00:25:49.865 [2024-11-20 07:27:53.099684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.865 [2024-11-20 07:27:53.099710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.865 qpair failed and we were unable to recover it. 00:25:49.865 [2024-11-20 07:27:53.099805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.865 [2024-11-20 07:27:53.099833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.865 qpair failed and we were unable to recover it. 00:25:49.865 [2024-11-20 07:27:53.099931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.865 [2024-11-20 07:27:53.099959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.865 qpair failed and we were unable to recover it. 00:25:49.865 [2024-11-20 07:27:53.100074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.865 [2024-11-20 07:27:53.100101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.865 qpair failed and we were unable to recover it. 00:25:49.865 [2024-11-20 07:27:53.100211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.865 [2024-11-20 07:27:53.100237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.865 qpair failed and we were unable to recover it. 
00:25:49.865 [2024-11-20 07:27:53.100350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.865 [2024-11-20 07:27:53.100377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.865 qpair failed and we were unable to recover it. 00:25:49.865 [2024-11-20 07:27:53.100460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.865 [2024-11-20 07:27:53.100487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.865 qpair failed and we were unable to recover it. 00:25:49.865 [2024-11-20 07:27:53.100605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.865 [2024-11-20 07:27:53.100637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.865 qpair failed and we were unable to recover it. 00:25:49.865 [2024-11-20 07:27:53.100735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.865 [2024-11-20 07:27:53.100762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.865 qpair failed and we were unable to recover it. 00:25:49.865 [2024-11-20 07:27:53.100902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.865 [2024-11-20 07:27:53.100929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.865 qpair failed and we were unable to recover it. 00:25:49.865 [2024-11-20 07:27:53.101051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.865 [2024-11-20 07:27:53.101086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.865 qpair failed and we were unable to recover it. 00:25:49.865 [2024-11-20 07:27:53.101206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.865 [2024-11-20 07:27:53.101243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.865 qpair failed and we were unable to recover it. 00:25:49.865 [2024-11-20 07:27:53.101370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.865 [2024-11-20 07:27:53.101409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.865 qpair failed and we were unable to recover it. 00:25:49.865 [2024-11-20 07:27:53.101507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.865 [2024-11-20 07:27:53.101536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.865 qpair failed and we were unable to recover it. 00:25:49.865 [2024-11-20 07:27:53.101626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.865 [2024-11-20 07:27:53.101653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.865 qpair failed and we were unable to recover it. 
00:25:49.865 [2024-11-20 07:27:53.101736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.865 [2024-11-20 07:27:53.101763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.865 qpair failed and we were unable to recover it. 00:25:49.865 [2024-11-20 07:27:53.101875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.865 [2024-11-20 07:27:53.101902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.865 qpair failed and we were unable to recover it. 00:25:49.865 [2024-11-20 07:27:53.101991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.865 [2024-11-20 07:27:53.102018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.865 qpair failed and we were unable to recover it. 00:25:49.865 [2024-11-20 07:27:53.102107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.865 [2024-11-20 07:27:53.102134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.865 qpair failed and we were unable to recover it. 00:25:49.865 [2024-11-20 07:27:53.102251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.865 [2024-11-20 07:27:53.102278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.865 qpair failed and we were unable to recover it. 00:25:49.865 [2024-11-20 07:27:53.102386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.865 [2024-11-20 07:27:53.102426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.865 qpair failed and we were unable to recover it. 00:25:49.865 [2024-11-20 07:27:53.102526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.865 [2024-11-20 07:27:53.102554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.865 qpair failed and we were unable to recover it. 00:25:49.865 [2024-11-20 07:27:53.102646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.865 [2024-11-20 07:27:53.102683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.865 qpair failed and we were unable to recover it. 00:25:49.865 [2024-11-20 07:27:53.102771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.865 [2024-11-20 07:27:53.102798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.865 qpair failed and we were unable to recover it. 00:25:49.865 [2024-11-20 07:27:53.102911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.865 [2024-11-20 07:27:53.102937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.865 qpair failed and we were unable to recover it. 
00:25:49.865 [2024-11-20 07:27:53.103053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.865 [2024-11-20 07:27:53.103080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.865 qpair failed and we were unable to recover it. 00:25:49.865 [2024-11-20 07:27:53.103167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.866 [2024-11-20 07:27:53.103193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.866 qpair failed and we were unable to recover it. 00:25:49.866 [2024-11-20 07:27:53.103269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.866 [2024-11-20 07:27:53.103295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.866 qpair failed and we were unable to recover it. 00:25:49.866 [2024-11-20 07:27:53.103402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.866 [2024-11-20 07:27:53.103428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.866 qpair failed and we were unable to recover it. 00:25:49.866 [2024-11-20 07:27:53.103514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.866 [2024-11-20 07:27:53.103540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.866 qpair failed and we were unable to recover it. 00:25:49.866 [2024-11-20 07:27:53.103668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.866 [2024-11-20 07:27:53.103695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.866 qpair failed and we were unable to recover it. 00:25:49.866 [2024-11-20 07:27:53.103786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.866 [2024-11-20 07:27:53.103812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.866 qpair failed and we were unable to recover it. 00:25:49.866 [2024-11-20 07:27:53.103947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.866 [2024-11-20 07:27:53.103973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.866 qpair failed and we were unable to recover it. 00:25:49.866 [2024-11-20 07:27:53.104081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.866 [2024-11-20 07:27:53.104107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.866 qpair failed and we were unable to recover it. 00:25:49.866 [2024-11-20 07:27:53.104222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.866 [2024-11-20 07:27:53.104253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.866 qpair failed and we were unable to recover it. 
00:25:49.866 [2024-11-20 07:27:53.104356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.866 [2024-11-20 07:27:53.104382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.866 qpair failed and we were unable to recover it. 00:25:49.866 [2024-11-20 07:27:53.104467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.866 [2024-11-20 07:27:53.104492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.866 qpair failed and we were unable to recover it. 00:25:49.866 [2024-11-20 07:27:53.104577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.866 [2024-11-20 07:27:53.104603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.866 qpair failed and we were unable to recover it. 00:25:49.866 [2024-11-20 07:27:53.104696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.866 [2024-11-20 07:27:53.104722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.866 qpair failed and we were unable to recover it. 00:25:49.866 [2024-11-20 07:27:53.104818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.866 [2024-11-20 07:27:53.104844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.866 qpair failed and we were unable to recover it. 00:25:49.866 [2024-11-20 07:27:53.104960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.866 [2024-11-20 07:27:53.104987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.866 qpair failed and we were unable to recover it. 00:25:49.866 [2024-11-20 07:27:53.105075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.866 [2024-11-20 07:27:53.105101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.866 qpair failed and we were unable to recover it. 00:25:49.866 [2024-11-20 07:27:53.105200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.866 [2024-11-20 07:27:53.105232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.866 qpair failed and we were unable to recover it. 00:25:49.866 [2024-11-20 07:27:53.105346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.866 [2024-11-20 07:27:53.105374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.866 qpair failed and we were unable to recover it. 00:25:49.866 [2024-11-20 07:27:53.105467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.866 [2024-11-20 07:27:53.105495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.866 qpair failed and we were unable to recover it. 
00:25:49.866 [2024-11-20 07:27:53.105583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.866 [2024-11-20 07:27:53.105613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.866 qpair failed and we were unable to recover it. 00:25:49.866 [2024-11-20 07:27:53.105717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.866 [2024-11-20 07:27:53.105743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.866 qpair failed and we were unable to recover it. 00:25:49.866 [2024-11-20 07:27:53.105857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.866 [2024-11-20 07:27:53.105893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.866 qpair failed and we were unable to recover it. 00:25:49.866 [2024-11-20 07:27:53.106021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.866 [2024-11-20 07:27:53.106049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.866 qpair failed and we were unable to recover it. 00:25:49.866 [2024-11-20 07:27:53.106143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.866 [2024-11-20 07:27:53.106186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.866 qpair failed and we were unable to recover it. 00:25:49.866 [2024-11-20 07:27:53.106276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.866 [2024-11-20 07:27:53.106310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.866 qpair failed and we were unable to recover it. 00:25:49.866 [2024-11-20 07:27:53.106423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.866 [2024-11-20 07:27:53.106450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.866 qpair failed and we were unable to recover it. 00:25:49.866 [2024-11-20 07:27:53.106535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.866 [2024-11-20 07:27:53.106561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.866 qpair failed and we were unable to recover it. 00:25:49.866 [2024-11-20 07:27:53.106652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.866 [2024-11-20 07:27:53.106678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.866 qpair failed and we were unable to recover it. 00:25:49.866 [2024-11-20 07:27:53.106780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.866 [2024-11-20 07:27:53.106807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.866 qpair failed and we were unable to recover it. 
00:25:49.866 [2024-11-20 07:27:53.106926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.866 [2024-11-20 07:27:53.106953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.867 qpair failed and we were unable to recover it. 00:25:49.867 [2024-11-20 07:27:53.107055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.867 [2024-11-20 07:27:53.107095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.867 qpair failed and we were unable to recover it. 00:25:49.867 [2024-11-20 07:27:53.107224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.867 [2024-11-20 07:27:53.107252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.867 qpair failed and we were unable to recover it. 00:25:49.867 [2024-11-20 07:27:53.107355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.867 [2024-11-20 07:27:53.107384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.867 qpair failed and we were unable to recover it. 00:25:49.867 [2024-11-20 07:27:53.107481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.867 [2024-11-20 07:27:53.107508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.867 qpair failed and we were unable to recover it. 00:25:49.867 [2024-11-20 07:27:53.107627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.867 [2024-11-20 07:27:53.107654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.867 qpair failed and we were unable to recover it. 00:25:49.867 [2024-11-20 07:27:53.107743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.867 [2024-11-20 07:27:53.107790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.867 qpair failed and we were unable to recover it. 00:25:49.867 [2024-11-20 07:27:53.107970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.867 [2024-11-20 07:27:53.107998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.867 qpair failed and we were unable to recover it. 00:25:49.867 [2024-11-20 07:27:53.108127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.867 [2024-11-20 07:27:53.108170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.867 qpair failed and we were unable to recover it. 00:25:49.867 [2024-11-20 07:27:53.108255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.867 [2024-11-20 07:27:53.108281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.867 qpair failed and we were unable to recover it. 
00:25:49.867 [2024-11-20 07:27:53.108373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.867 [2024-11-20 07:27:53.108400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.867 qpair failed and we were unable to recover it. 00:25:49.867 [2024-11-20 07:27:53.108511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.867 [2024-11-20 07:27:53.108538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.867 qpair failed and we were unable to recover it. 00:25:49.867 [2024-11-20 07:27:53.108625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.867 [2024-11-20 07:27:53.108651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.867 qpair failed and we were unable to recover it. 00:25:49.867 [2024-11-20 07:27:53.108734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.867 [2024-11-20 07:27:53.108761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.867 qpair failed and we were unable to recover it. 00:25:49.867 [2024-11-20 07:27:53.108894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.867 [2024-11-20 07:27:53.108922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.867 qpair failed and we were unable to recover it. 00:25:49.867 [2024-11-20 07:27:53.109055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.867 [2024-11-20 07:27:53.109082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.867 qpair failed and we were unable to recover it. 00:25:49.867 [2024-11-20 07:27:53.109191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.867 [2024-11-20 07:27:53.109220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.867 qpair failed and we were unable to recover it. 00:25:49.867 [2024-11-20 07:27:53.109334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.867 [2024-11-20 07:27:53.109362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.867 qpair failed and we were unable to recover it. 00:25:49.867 [2024-11-20 07:27:53.109452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.867 [2024-11-20 07:27:53.109432] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:25:49.867 [2024-11-20 07:27:53.109480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.867 qpair failed and we were unable to recover it. 
00:25:49.867 [2024-11-20 07:27:53.109514] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:49.867 [2024-11-20 07:27:53.109567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.867 [2024-11-20 07:27:53.109605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.867 qpair failed and we were unable to recover it. 00:25:49.867 [2024-11-20 07:27:53.109753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.867 [2024-11-20 07:27:53.109779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.867 qpair failed and we were unable to recover it. 00:25:49.867 [2024-11-20 07:27:53.109889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.867 [2024-11-20 07:27:53.109914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.867 qpair failed and we were unable to recover it. 00:25:49.867 [2024-11-20 07:27:53.110005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.867 [2024-11-20 07:27:53.110032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.867 qpair failed and we were unable to recover it. 00:25:49.867 [2024-11-20 07:27:53.110133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.867 [2024-11-20 07:27:53.110173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.867 qpair failed and we were unable to recover it. 00:25:49.867 [2024-11-20 07:27:53.110281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.867 [2024-11-20 07:27:53.110338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.867 qpair failed and we were unable to recover it. 00:25:49.867 [2024-11-20 07:27:53.110436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.867 [2024-11-20 07:27:53.110466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.867 qpair failed and we were unable to recover it. 00:25:49.867 [2024-11-20 07:27:53.110560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.867 [2024-11-20 07:27:53.110588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.867 qpair failed and we were unable to recover it. 00:25:49.867 [2024-11-20 07:27:53.110672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.867 [2024-11-20 07:27:53.110700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.867 qpair failed and we were unable to recover it. 
00:25:49.867 [2024-11-20 07:27:53.110816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.867 [2024-11-20 07:27:53.110844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.867 qpair failed and we were unable to recover it. 00:25:49.867 [2024-11-20 07:27:53.110937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.867 [2024-11-20 07:27:53.110966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.867 qpair failed and we were unable to recover it. 00:25:49.867 [2024-11-20 07:27:53.111058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.867 [2024-11-20 07:27:53.111087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.867 qpair failed and we were unable to recover it. 00:25:49.868 [2024-11-20 07:27:53.111177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.868 [2024-11-20 07:27:53.111204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.868 qpair failed and we were unable to recover it. 00:25:49.868 [2024-11-20 07:27:53.111318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.868 [2024-11-20 07:27:53.111346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.868 qpair failed and we were unable to recover it. 00:25:49.868 [2024-11-20 07:27:53.111430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.868 [2024-11-20 07:27:53.111457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.868 qpair failed and we were unable to recover it. 00:25:49.868 [2024-11-20 07:27:53.111546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.868 [2024-11-20 07:27:53.111573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.868 qpair failed and we were unable to recover it. 00:25:49.868 [2024-11-20 07:27:53.111726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.868 [2024-11-20 07:27:53.111753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.868 qpair failed and we were unable to recover it. 00:25:49.868 [2024-11-20 07:27:53.111841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.868 [2024-11-20 07:27:53.111869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.868 qpair failed and we were unable to recover it. 00:25:49.868 [2024-11-20 07:27:53.111967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.868 [2024-11-20 07:27:53.111994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.868 qpair failed and we were unable to recover it. 
00:25:49.868 [2024-11-20 07:27:53.112076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.868 [2024-11-20 07:27:53.112103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.868 qpair failed and we were unable to recover it. 00:25:49.868 [2024-11-20 07:27:53.112227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.868 [2024-11-20 07:27:53.112267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.868 qpair failed and we were unable to recover it. 00:25:49.868 [2024-11-20 07:27:53.112374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.868 [2024-11-20 07:27:53.112405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.868 qpair failed and we were unable to recover it. 00:25:49.868 [2024-11-20 07:27:53.112491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.868 [2024-11-20 07:27:53.112519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.868 qpair failed and we were unable to recover it. 00:25:49.868 [2024-11-20 07:27:53.112613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.868 [2024-11-20 07:27:53.112640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.868 qpair failed and we were unable to recover it. 00:25:49.868 [2024-11-20 07:27:53.112754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.868 [2024-11-20 07:27:53.112781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.868 qpair failed and we were unable to recover it. 00:25:49.868 [2024-11-20 07:27:53.112877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.868 [2024-11-20 07:27:53.112904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.868 qpair failed and we were unable to recover it. 00:25:49.868 [2024-11-20 07:27:53.113005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.868 [2024-11-20 07:27:53.113033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.868 qpair failed and we were unable to recover it. 00:25:49.868 [2024-11-20 07:27:53.113127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.868 [2024-11-20 07:27:53.113154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.868 qpair failed and we were unable to recover it. 00:25:49.868 [2024-11-20 07:27:53.113252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.868 [2024-11-20 07:27:53.113278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.868 qpair failed and we were unable to recover it. 
00:25:49.868 [2024-11-20 07:27:53.113371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.868 [2024-11-20 07:27:53.113398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.868 qpair failed and we were unable to recover it. 00:25:49.868 [2024-11-20 07:27:53.113488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.868 [2024-11-20 07:27:53.113514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.868 qpair failed and we were unable to recover it. 00:25:49.868 [2024-11-20 07:27:53.113601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.868 [2024-11-20 07:27:53.113634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.868 qpair failed and we were unable to recover it. 00:25:49.868 [2024-11-20 07:27:53.113724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.868 [2024-11-20 07:27:53.113754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.868 qpair failed and we were unable to recover it. 00:25:49.868 [2024-11-20 07:27:53.113869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.868 [2024-11-20 07:27:53.113897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.868 qpair failed and we were unable to recover it. 00:25:49.868 [2024-11-20 07:27:53.114008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.868 [2024-11-20 07:27:53.114036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.868 qpair failed and we were unable to recover it. 00:25:49.868 [2024-11-20 07:27:53.114127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.868 [2024-11-20 07:27:53.114155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.868 qpair failed and we were unable to recover it. 00:25:49.868 [2024-11-20 07:27:53.114258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.868 [2024-11-20 07:27:53.114301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.868 qpair failed and we were unable to recover it. 00:25:49.868 [2024-11-20 07:27:53.114401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.868 [2024-11-20 07:27:53.114429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.868 qpair failed and we were unable to recover it. 00:25:49.868 [2024-11-20 07:27:53.114514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.868 [2024-11-20 07:27:53.114541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.868 qpair failed and we were unable to recover it. 
00:25:49.868 [2024-11-20 07:27:53.114634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.868 [2024-11-20 07:27:53.114665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.868 qpair failed and we were unable to recover it. 00:25:49.868 [2024-11-20 07:27:53.114776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.868 [2024-11-20 07:27:53.114803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.868 qpair failed and we were unable to recover it. 00:25:49.868 [2024-11-20 07:27:53.114902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.868 [2024-11-20 07:27:53.114928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.868 qpair failed and we were unable to recover it. 00:25:49.868 [2024-11-20 07:27:53.115018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.868 [2024-11-20 07:27:53.115044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.868 qpair failed and we were unable to recover it. 00:25:49.868 [2024-11-20 07:27:53.115139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.868 [2024-11-20 07:27:53.115179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.868 qpair failed and we were unable to recover it. 00:25:49.868 [2024-11-20 07:27:53.115320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.868 [2024-11-20 07:27:53.115349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.868 qpair failed and we were unable to recover it. 00:25:49.868 [2024-11-20 07:27:53.115432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.868 [2024-11-20 07:27:53.115462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.868 qpair failed and we were unable to recover it. 00:25:49.869 [2024-11-20 07:27:53.115552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.869 [2024-11-20 07:27:53.115579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.869 qpair failed and we were unable to recover it. 00:25:49.869 [2024-11-20 07:27:53.115670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.869 [2024-11-20 07:27:53.115698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.869 qpair failed and we were unable to recover it. 00:25:49.869 [2024-11-20 07:27:53.115782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.869 [2024-11-20 07:27:53.115808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.869 qpair failed and we were unable to recover it. 
00:25:49.869 [2024-11-20 07:27:53.115926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.869 [2024-11-20 07:27:53.115953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.869 qpair failed and we were unable to recover it. 00:25:49.869 [2024-11-20 07:27:53.116054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.869 [2024-11-20 07:27:53.116094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.869 qpair failed and we were unable to recover it. 00:25:49.869 [2024-11-20 07:27:53.116186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.869 [2024-11-20 07:27:53.116215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.869 qpair failed and we were unable to recover it. 00:25:49.869 [2024-11-20 07:27:53.116329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.869 [2024-11-20 07:27:53.116358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.869 qpair failed and we were unable to recover it. 00:25:49.869 [2024-11-20 07:27:53.116461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.869 [2024-11-20 07:27:53.116488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.869 qpair failed and we were unable to recover it. 00:25:49.869 [2024-11-20 07:27:53.116577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.869 [2024-11-20 07:27:53.116604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.869 qpair failed and we were unable to recover it. 00:25:49.869 [2024-11-20 07:27:53.116700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.869 [2024-11-20 07:27:53.116727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.869 qpair failed and we were unable to recover it. 00:25:49.869 [2024-11-20 07:27:53.116837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.869 [2024-11-20 07:27:53.116865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.869 qpair failed and we were unable to recover it. 00:25:49.869 [2024-11-20 07:27:53.116984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.869 [2024-11-20 07:27:53.117011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.869 qpair failed and we were unable to recover it. 00:25:49.869 [2024-11-20 07:27:53.117101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.869 [2024-11-20 07:27:53.117128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.869 qpair failed and we were unable to recover it. 
00:25:49.869 [2024-11-20 07:27:53.117219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.869 [2024-11-20 07:27:53.117247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.869 qpair failed and we were unable to recover it. 00:25:49.869 [2024-11-20 07:27:53.117362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.869 [2024-11-20 07:27:53.117402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.869 qpair failed and we were unable to recover it. 00:25:49.869 [2024-11-20 07:27:53.117499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.869 [2024-11-20 07:27:53.117526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.869 qpair failed and we were unable to recover it. 00:25:49.869 [2024-11-20 07:27:53.117639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.869 [2024-11-20 07:27:53.117666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.869 qpair failed and we were unable to recover it. 00:25:49.869 [2024-11-20 07:27:53.117764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.869 [2024-11-20 07:27:53.117790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.869 qpair failed and we were unable to recover it. 00:25:49.869 [2024-11-20 07:27:53.117905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.869 [2024-11-20 07:27:53.117931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.869 qpair failed and we were unable to recover it. 00:25:49.869 [2024-11-20 07:27:53.118011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.869 [2024-11-20 07:27:53.118037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.869 qpair failed and we were unable to recover it. 00:25:49.869 [2024-11-20 07:27:53.118129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.869 [2024-11-20 07:27:53.118156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.869 qpair failed and we were unable to recover it. 00:25:49.869 [2024-11-20 07:27:53.118259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.869 [2024-11-20 07:27:53.118285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.869 qpair failed and we were unable to recover it. 00:25:49.869 [2024-11-20 07:27:53.118384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.869 [2024-11-20 07:27:53.118410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.869 qpair failed and we were unable to recover it. 
00:25:49.869 [2024-11-20 07:27:53.118494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.869 [2024-11-20 07:27:53.118520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.869 qpair failed and we were unable to recover it. 00:25:49.869 [2024-11-20 07:27:53.118660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.869 [2024-11-20 07:27:53.118686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.869 qpair failed and we were unable to recover it. 00:25:49.869 [2024-11-20 07:27:53.118775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.869 [2024-11-20 07:27:53.118801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.869 qpair failed and we were unable to recover it. 00:25:49.869 [2024-11-20 07:27:53.118877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.869 [2024-11-20 07:27:53.118903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.869 qpair failed and we were unable to recover it. 00:25:49.869 [2024-11-20 07:27:53.119016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.869 [2024-11-20 07:27:53.119041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.869 qpair failed and we were unable to recover it. 00:25:49.869 [2024-11-20 07:27:53.119127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.869 [2024-11-20 07:27:53.119153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.869 qpair failed and we were unable to recover it. 00:25:49.869 [2024-11-20 07:27:53.119237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.869 [2024-11-20 07:27:53.119264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.869 qpair failed and we were unable to recover it. 00:25:49.869 [2024-11-20 07:27:53.119354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.869 [2024-11-20 07:27:53.119381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.869 qpair failed and we were unable to recover it. 00:25:49.869 [2024-11-20 07:27:53.119467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.869 [2024-11-20 07:27:53.119494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.869 qpair failed and we were unable to recover it. 00:25:49.869 [2024-11-20 07:27:53.119578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.869 [2024-11-20 07:27:53.119604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.869 qpair failed and we were unable to recover it. 
00:25:49.869 [2024-11-20 07:27:53.119681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.870 [2024-11-20 07:27:53.119707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.870 qpair failed and we were unable to recover it. 00:25:49.870 [2024-11-20 07:27:53.119811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.870 [2024-11-20 07:27:53.119837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.870 qpair failed and we were unable to recover it. 00:25:49.870 [2024-11-20 07:27:53.119959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.870 [2024-11-20 07:27:53.119992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.870 qpair failed and we were unable to recover it. 00:25:49.870 [2024-11-20 07:27:53.120092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.870 [2024-11-20 07:27:53.120131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.870 qpair failed and we were unable to recover it. 00:25:49.870 [2024-11-20 07:27:53.120243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.870 [2024-11-20 07:27:53.120271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.870 qpair failed and we were unable to recover it. 00:25:49.870 [2024-11-20 07:27:53.120361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.870 [2024-11-20 07:27:53.120389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.870 qpair failed and we were unable to recover it. 00:25:49.870 [2024-11-20 07:27:53.120482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.870 [2024-11-20 07:27:53.120508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.870 qpair failed and we were unable to recover it. 00:25:49.870 [2024-11-20 07:27:53.120590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.870 [2024-11-20 07:27:53.120616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.870 qpair failed and we were unable to recover it. 00:25:49.870 [2024-11-20 07:27:53.120708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.870 [2024-11-20 07:27:53.120736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.870 qpair failed and we were unable to recover it. 00:25:49.870 [2024-11-20 07:27:53.120850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.870 [2024-11-20 07:27:53.120879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.870 qpair failed and we were unable to recover it. 
00:25:49.870 [2024-11-20 07:27:53.120968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.870 [2024-11-20 07:27:53.120996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.870 qpair failed and we were unable to recover it. 00:25:49.870 [2024-11-20 07:27:53.121075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.870 [2024-11-20 07:27:53.121102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.870 qpair failed and we were unable to recover it. 00:25:49.870 [2024-11-20 07:27:53.121215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.870 [2024-11-20 07:27:53.121243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.870 qpair failed and we were unable to recover it. 00:25:49.870 [2024-11-20 07:27:53.121343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.870 [2024-11-20 07:27:53.121371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.870 qpair failed and we were unable to recover it. 00:25:49.870 [2024-11-20 07:27:53.121465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.870 [2024-11-20 07:27:53.121493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.870 qpair failed and we were unable to recover it. 00:25:49.870 [2024-11-20 07:27:53.121573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.870 [2024-11-20 07:27:53.121599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.870 qpair failed and we were unable to recover it. 00:25:49.870 [2024-11-20 07:27:53.121749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.870 [2024-11-20 07:27:53.121779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.870 qpair failed and we were unable to recover it. 00:25:49.870 [2024-11-20 07:27:53.121867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.870 [2024-11-20 07:27:53.121894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.870 qpair failed and we were unable to recover it. 00:25:49.870 [2024-11-20 07:27:53.121980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.870 [2024-11-20 07:27:53.122006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.870 qpair failed and we were unable to recover it. 00:25:49.870 [2024-11-20 07:27:53.122092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.870 [2024-11-20 07:27:53.122119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.870 qpair failed and we were unable to recover it. 
00:25:49.870 [2024-11-20 07:27:53.122247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.870 [2024-11-20 07:27:53.122283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.870 qpair failed and we were unable to recover it. 00:25:49.870 [2024-11-20 07:27:53.122391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.870 [2024-11-20 07:27:53.122419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.870 qpair failed and we were unable to recover it. 00:25:49.870 [2024-11-20 07:27:53.122526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.870 [2024-11-20 07:27:53.122565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.870 qpair failed and we were unable to recover it. 00:25:49.870 [2024-11-20 07:27:53.122655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.870 [2024-11-20 07:27:53.122682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.870 qpair failed and we were unable to recover it. 00:25:49.870 [2024-11-20 07:27:53.122781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.870 [2024-11-20 07:27:53.122807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.870 qpair failed and we were unable to recover it. 00:25:49.870 [2024-11-20 07:27:53.122893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.870 [2024-11-20 07:27:53.122919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.870 qpair failed and we were unable to recover it. 00:25:49.870 [2024-11-20 07:27:53.123008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.870 [2024-11-20 07:27:53.123034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.870 qpair failed and we were unable to recover it. 00:25:49.870 [2024-11-20 07:27:53.123153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.870 [2024-11-20 07:27:53.123181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.870 qpair failed and we were unable to recover it. 00:25:49.870 [2024-11-20 07:27:53.123298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.870 [2024-11-20 07:27:53.123334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.870 qpair failed and we were unable to recover it. 00:25:49.870 [2024-11-20 07:27:53.123430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.870 [2024-11-20 07:27:53.123458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.871 qpair failed and we were unable to recover it. 
00:25:49.871 [2024-11-20 07:27:53.123541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.871 [2024-11-20 07:27:53.123568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.871 qpair failed and we were unable to recover it. 00:25:49.871 [2024-11-20 07:27:53.123659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.871 [2024-11-20 07:27:53.123686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.871 qpair failed and we were unable to recover it. 00:25:49.871 [2024-11-20 07:27:53.123804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.871 [2024-11-20 07:27:53.123831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.871 qpair failed and we were unable to recover it. 00:25:49.871 [2024-11-20 07:27:53.123918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.871 [2024-11-20 07:27:53.123945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.871 qpair failed and we were unable to recover it. 00:25:49.871 [2024-11-20 07:27:53.124036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.871 [2024-11-20 07:27:53.124062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.871 qpair failed and we were unable to recover it. 00:25:49.871 [2024-11-20 07:27:53.124205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.871 [2024-11-20 07:27:53.124233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.871 qpair failed and we were unable to recover it. 00:25:49.871 [2024-11-20 07:27:53.124328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.871 [2024-11-20 07:27:53.124355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.871 qpair failed and we were unable to recover it. 00:25:49.871 [2024-11-20 07:27:53.124448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.871 [2024-11-20 07:27:53.124476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.871 qpair failed and we were unable to recover it. 00:25:49.871 [2024-11-20 07:27:53.124565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.871 [2024-11-20 07:27:53.124591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.871 qpair failed and we were unable to recover it. 00:25:49.871 [2024-11-20 07:27:53.124669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.871 [2024-11-20 07:27:53.124695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.871 qpair failed and we were unable to recover it. 
00:25:49.871 [2024-11-20 07:27:53.124781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.871 [2024-11-20 07:27:53.124808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.871 qpair failed and we were unable to recover it. 00:25:49.871 [2024-11-20 07:27:53.124892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.871 [2024-11-20 07:27:53.124928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.871 qpair failed and we were unable to recover it. 00:25:49.871 [2024-11-20 07:27:53.125047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.871 [2024-11-20 07:27:53.125073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.871 qpair failed and we were unable to recover it. 00:25:49.871 [2024-11-20 07:27:53.125151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.871 [2024-11-20 07:27:53.125178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.871 qpair failed and we were unable to recover it. 00:25:49.871 [2024-11-20 07:27:53.125285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.871 [2024-11-20 07:27:53.125324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.871 qpair failed and we were unable to recover it. 00:25:49.871 [2024-11-20 07:27:53.125410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.871 [2024-11-20 07:27:53.125436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.871 qpair failed and we were unable to recover it. 00:25:49.871 [2024-11-20 07:27:53.125523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.871 [2024-11-20 07:27:53.125550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.871 qpair failed and we were unable to recover it. 00:25:49.871 [2024-11-20 07:27:53.125651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.871 [2024-11-20 07:27:53.125677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.871 qpair failed and we were unable to recover it. 00:25:49.871 [2024-11-20 07:27:53.125759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.871 [2024-11-20 07:27:53.125786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.871 qpair failed and we were unable to recover it. 00:25:49.871 [2024-11-20 07:27:53.125873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.871 [2024-11-20 07:27:53.125901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.871 qpair failed and we were unable to recover it. 
00:25:49.871 [2024-11-20 07:27:53.126039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.871 [2024-11-20 07:27:53.126065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.871 qpair failed and we were unable to recover it. 00:25:49.871 [2024-11-20 07:27:53.126158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.871 [2024-11-20 07:27:53.126188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.871 qpair failed and we were unable to recover it. 00:25:49.871 [2024-11-20 07:27:53.126285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.871 [2024-11-20 07:27:53.126322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.871 qpair failed and we were unable to recover it. 00:25:49.871 [2024-11-20 07:27:53.126411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.871 [2024-11-20 07:27:53.126438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.871 qpair failed and we were unable to recover it. 00:25:49.871 [2024-11-20 07:27:53.126526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.871 [2024-11-20 07:27:53.126559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.871 qpair failed and we were unable to recover it. 00:25:49.871 [2024-11-20 07:27:53.126676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.871 [2024-11-20 07:27:53.126703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.871 qpair failed and we were unable to recover it. 00:25:49.871 [2024-11-20 07:27:53.126794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.871 [2024-11-20 07:27:53.126821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.871 qpair failed and we were unable to recover it. 00:25:49.871 [2024-11-20 07:27:53.126936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.871 [2024-11-20 07:27:53.126962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.871 qpair failed and we were unable to recover it. 00:25:49.871 [2024-11-20 07:27:53.127050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.871 [2024-11-20 07:27:53.127078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.871 qpair failed and we were unable to recover it. 00:25:49.871 [2024-11-20 07:27:53.127173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.871 [2024-11-20 07:27:53.127199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.871 qpair failed and we were unable to recover it. 
00:25:49.871 [2024-11-20 07:27:53.127278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.871 [2024-11-20 07:27:53.127311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.871 qpair failed and we were unable to recover it. 00:25:49.871 [2024-11-20 07:27:53.127391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.871 [2024-11-20 07:27:53.127418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.871 qpair failed and we were unable to recover it. 00:25:49.871 [2024-11-20 07:27:53.127508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.872 [2024-11-20 07:27:53.127534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.872 qpair failed and we were unable to recover it. 00:25:49.872 [2024-11-20 07:27:53.127647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.872 [2024-11-20 07:27:53.127691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.872 qpair failed and we were unable to recover it. 00:25:49.872 [2024-11-20 07:27:53.127770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.872 [2024-11-20 07:27:53.127797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.872 qpair failed and we were unable to recover it. 00:25:49.872 [2024-11-20 07:27:53.127877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.872 [2024-11-20 07:27:53.127903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.872 qpair failed and we were unable to recover it. 00:25:49.872 [2024-11-20 07:27:53.127995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.872 [2024-11-20 07:27:53.128021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.872 qpair failed and we were unable to recover it. 00:25:49.872 [2024-11-20 07:27:53.128144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.872 [2024-11-20 07:27:53.128170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.872 qpair failed and we were unable to recover it. 00:25:49.872 [2024-11-20 07:27:53.128271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.872 [2024-11-20 07:27:53.128297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.872 qpair failed and we were unable to recover it. 00:25:49.872 [2024-11-20 07:27:53.128404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.872 [2024-11-20 07:27:53.128430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.872 qpair failed and we were unable to recover it. 
00:25:49.872 [2024-11-20 07:27:53.128519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.872 [2024-11-20 07:27:53.128545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.872 qpair failed and we were unable to recover it. 00:25:49.872 [2024-11-20 07:27:53.128666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.872 [2024-11-20 07:27:53.128692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.872 qpair failed and we were unable to recover it. 00:25:49.872 [2024-11-20 07:27:53.128809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.872 [2024-11-20 07:27:53.128836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.872 qpair failed and we were unable to recover it. 00:25:49.872 [2024-11-20 07:27:53.128932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.872 [2024-11-20 07:27:53.128958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.872 qpair failed and we were unable to recover it. 00:25:49.872 [2024-11-20 07:27:53.129041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.872 [2024-11-20 07:27:53.129067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.872 qpair failed and we were unable to recover it. 00:25:49.872 [2024-11-20 07:27:53.129281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.872 [2024-11-20 07:27:53.129339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.872 qpair failed and we were unable to recover it. 00:25:49.872 [2024-11-20 07:27:53.129433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.872 [2024-11-20 07:27:53.129461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.872 qpair failed and we were unable to recover it. 00:25:49.872 [2024-11-20 07:27:53.129567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.872 [2024-11-20 07:27:53.129607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.872 qpair failed and we were unable to recover it. 00:25:49.872 [2024-11-20 07:27:53.129702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.872 [2024-11-20 07:27:53.129729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.872 qpair failed and we were unable to recover it. 00:25:49.872 [2024-11-20 07:27:53.129841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.872 [2024-11-20 07:27:53.129867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.872 qpair failed and we were unable to recover it. 
00:25:49.872 [2024-11-20 07:27:53.129982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.872 [2024-11-20 07:27:53.130009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.872 qpair failed and we were unable to recover it. 00:25:49.872 [2024-11-20 07:27:53.130089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.872 [2024-11-20 07:27:53.130121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.872 qpair failed and we were unable to recover it. 00:25:49.872 [2024-11-20 07:27:53.130214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.872 [2024-11-20 07:27:53.130240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.872 qpair failed and we were unable to recover it. 00:25:49.872 [2024-11-20 07:27:53.130363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.872 [2024-11-20 07:27:53.130403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.872 qpair failed and we were unable to recover it. 00:25:49.872 [2024-11-20 07:27:53.130501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.872 [2024-11-20 07:27:53.130531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.872 qpair failed and we were unable to recover it. 00:25:49.872 [2024-11-20 07:27:53.130652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.872 [2024-11-20 07:27:53.130690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.872 qpair failed and we were unable to recover it. 00:25:49.872 [2024-11-20 07:27:53.130807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.872 [2024-11-20 07:27:53.130834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.872 qpair failed and we were unable to recover it. 00:25:49.872 [2024-11-20 07:27:53.130915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.872 [2024-11-20 07:27:53.130942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.872 qpair failed and we were unable to recover it. 00:25:49.872 [2024-11-20 07:27:53.131056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.872 [2024-11-20 07:27:53.131083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.872 qpair failed and we were unable to recover it. 00:25:49.872 [2024-11-20 07:27:53.131174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.872 [2024-11-20 07:27:53.131202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.872 qpair failed and we were unable to recover it. 
00:25:49.872 [2024-11-20 07:27:53.131320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.872 [2024-11-20 07:27:53.131360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.872 qpair failed and we were unable to recover it. 00:25:49.872 [2024-11-20 07:27:53.131458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.872 [2024-11-20 07:27:53.131486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.872 qpair failed and we were unable to recover it. 00:25:49.872 [2024-11-20 07:27:53.131579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.872 [2024-11-20 07:27:53.131606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.872 qpair failed and we were unable to recover it. 00:25:49.872 [2024-11-20 07:27:53.131719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.872 [2024-11-20 07:27:53.131746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.872 qpair failed and we were unable to recover it. 00:25:49.873 [2024-11-20 07:27:53.131864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.873 [2024-11-20 07:27:53.131891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.873 qpair failed and we were unable to recover it. 00:25:49.873 [2024-11-20 07:27:53.132028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.873 [2024-11-20 07:27:53.132055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.873 qpair failed and we were unable to recover it. 00:25:49.873 [2024-11-20 07:27:53.132142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.873 [2024-11-20 07:27:53.132169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.873 qpair failed and we were unable to recover it. 00:25:49.873 [2024-11-20 07:27:53.132252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.873 [2024-11-20 07:27:53.132278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.873 qpair failed and we were unable to recover it. 00:25:49.873 [2024-11-20 07:27:53.132378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.873 [2024-11-20 07:27:53.132407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.873 qpair failed and we were unable to recover it. 00:25:49.873 [2024-11-20 07:27:53.132492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.873 [2024-11-20 07:27:53.132519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.873 qpair failed and we were unable to recover it. 
00:25:49.873 [2024-11-20 07:27:53.132625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.873 [2024-11-20 07:27:53.132653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.873 qpair failed and we were unable to recover it. 00:25:49.873 [2024-11-20 07:27:53.132794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.873 [2024-11-20 07:27:53.132839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.873 qpair failed and we were unable to recover it. 00:25:49.873 [2024-11-20 07:27:53.133913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.873 [2024-11-20 07:27:53.133949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.873 qpair failed and we were unable to recover it. 00:25:49.873 [2024-11-20 07:27:53.134118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.873 [2024-11-20 07:27:53.134147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.873 qpair failed and we were unable to recover it. 00:25:49.873 [2024-11-20 07:27:53.134295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.873 [2024-11-20 07:27:53.134332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.873 qpair failed and we were unable to recover it. 00:25:49.873 [2024-11-20 07:27:53.134421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.873 [2024-11-20 07:27:53.134450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.873 qpair failed and we were unable to recover it. 00:25:49.873 [2024-11-20 07:27:53.134539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.873 [2024-11-20 07:27:53.134566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.873 qpair failed and we were unable to recover it. 00:25:49.873 [2024-11-20 07:27:53.134665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.873 [2024-11-20 07:27:53.134692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.873 qpair failed and we were unable to recover it. 00:25:49.873 [2024-11-20 07:27:53.134795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.873 [2024-11-20 07:27:53.134822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.873 qpair failed and we were unable to recover it. 00:25:49.873 [2024-11-20 07:27:53.134937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.873 [2024-11-20 07:27:53.134965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.873 qpair failed and we were unable to recover it. 
00:25:49.873 [2024-11-20 07:27:53.135122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.873 [2024-11-20 07:27:53.135151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.873 qpair failed and we were unable to recover it. 00:25:49.873 [2024-11-20 07:27:53.135249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.873 [2024-11-20 07:27:53.135278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.873 qpair failed and we were unable to recover it. 00:25:49.873 [2024-11-20 07:27:53.135418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.873 [2024-11-20 07:27:53.135459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.873 qpair failed and we were unable to recover it. 00:25:49.873 [2024-11-20 07:27:53.135549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.873 [2024-11-20 07:27:53.135578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.873 qpair failed and we were unable to recover it. 00:25:49.873 [2024-11-20 07:27:53.135687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.873 [2024-11-20 07:27:53.135727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.873 qpair failed and we were unable to recover it. 00:25:49.873 [2024-11-20 07:27:53.135873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.873 [2024-11-20 07:27:53.135901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.873 qpair failed and we were unable to recover it. 00:25:49.873 [2024-11-20 07:27:53.136019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.873 [2024-11-20 07:27:53.136047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.873 qpair failed and we were unable to recover it. 00:25:49.873 [2024-11-20 07:27:53.136179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.873 [2024-11-20 07:27:53.136222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.873 qpair failed and we were unable to recover it. 00:25:49.873 [2024-11-20 07:27:53.136328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.873 [2024-11-20 07:27:53.136355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.873 qpair failed and we were unable to recover it. 00:25:49.873 [2024-11-20 07:27:53.136452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.873 [2024-11-20 07:27:53.136479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.873 qpair failed and we were unable to recover it. 
00:25:49.873 [2024-11-20 07:27:53.136582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.873 [2024-11-20 07:27:53.136620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.873 qpair failed and we were unable to recover it. 00:25:49.873 [2024-11-20 07:27:53.136746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.873 [2024-11-20 07:27:53.136778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.873 qpair failed and we were unable to recover it. 00:25:49.873 [2024-11-20 07:27:53.136866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.873 [2024-11-20 07:27:53.136911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.873 qpair failed and we were unable to recover it. 00:25:49.873 [2024-11-20 07:27:53.137103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.873 [2024-11-20 07:27:53.137146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.873 qpair failed and we were unable to recover it. 00:25:49.873 [2024-11-20 07:27:53.137227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.873 [2024-11-20 07:27:53.137254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.873 qpair failed and we were unable to recover it. 00:25:49.873 [2024-11-20 07:27:53.137367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.873 [2024-11-20 07:27:53.137395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.873 qpair failed and we were unable to recover it. 00:25:49.873 [2024-11-20 07:27:53.137488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.873 [2024-11-20 07:27:53.137515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.873 qpair failed and we were unable to recover it. 00:25:49.873 [2024-11-20 07:27:53.137607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.873 [2024-11-20 07:27:53.137635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.873 qpair failed and we were unable to recover it. 00:25:49.873 [2024-11-20 07:27:53.137786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.873 [2024-11-20 07:27:53.137813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.873 qpair failed and we were unable to recover it. 00:25:49.873 [2024-11-20 07:27:53.137903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.874 [2024-11-20 07:27:53.137929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.874 qpair failed and we were unable to recover it. 
00:25:49.874 [2024-11-20 07:27:53.138020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.874 [2024-11-20 07:27:53.138047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.874 qpair failed and we were unable to recover it. 00:25:49.874 [2024-11-20 07:27:53.138142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.874 [2024-11-20 07:27:53.138170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.874 qpair failed and we were unable to recover it. 00:25:49.874 [2024-11-20 07:27:53.138259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.874 [2024-11-20 07:27:53.138285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.874 qpair failed and we were unable to recover it. 00:25:49.874 [2024-11-20 07:27:53.138394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.874 [2024-11-20 07:27:53.138422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.874 qpair failed and we were unable to recover it. 00:25:49.874 [2024-11-20 07:27:53.138517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.874 [2024-11-20 07:27:53.138545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.874 qpair failed and we were unable to recover it. 00:25:49.874 [2024-11-20 07:27:53.138678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.874 [2024-11-20 07:27:53.138704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.874 qpair failed and we were unable to recover it. 00:25:49.874 [2024-11-20 07:27:53.138793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.874 [2024-11-20 07:27:53.138819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.874 qpair failed and we were unable to recover it. 00:25:49.874 [2024-11-20 07:27:53.138892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.874 [2024-11-20 07:27:53.138919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.874 qpair failed and we were unable to recover it. 00:25:49.874 [2024-11-20 07:27:53.139010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.874 [2024-11-20 07:27:53.139036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.874 qpair failed and we were unable to recover it. 00:25:49.874 [2024-11-20 07:27:53.139152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.874 [2024-11-20 07:27:53.139180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.874 qpair failed and we were unable to recover it. 
00:25:49.874 [2024-11-20 07:27:53.139257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.874 [2024-11-20 07:27:53.139283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.874 qpair failed and we were unable to recover it. 00:25:49.874 [2024-11-20 07:27:53.139394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.874 [2024-11-20 07:27:53.139421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.874 qpair failed and we were unable to recover it. 00:25:49.874 [2024-11-20 07:27:53.139511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.874 [2024-11-20 07:27:53.139538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.874 qpair failed and we were unable to recover it. 00:25:49.874 [2024-11-20 07:27:53.139638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.874 [2024-11-20 07:27:53.139665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.874 qpair failed and we were unable to recover it. 00:25:49.874 [2024-11-20 07:27:53.139779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.874 [2024-11-20 07:27:53.139812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.874 qpair failed and we were unable to recover it. 00:25:49.874 [2024-11-20 07:27:53.139900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.874 [2024-11-20 07:27:53.139927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.874 qpair failed and we were unable to recover it. 00:25:49.874 [2024-11-20 07:27:53.140046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.874 [2024-11-20 07:27:53.140072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.874 qpair failed and we were unable to recover it. 00:25:49.874 [2024-11-20 07:27:53.140183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.874 [2024-11-20 07:27:53.140223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.874 qpair failed and we were unable to recover it. 00:25:49.874 [2024-11-20 07:27:53.140342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.874 [2024-11-20 07:27:53.140375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.874 qpair failed and we were unable to recover it. 00:25:49.874 [2024-11-20 07:27:53.140453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.874 [2024-11-20 07:27:53.140480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.874 qpair failed and we were unable to recover it. 
00:25:49.874 [2024-11-20 07:27:53.140574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.874 [2024-11-20 07:27:53.140615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.874 qpair failed and we were unable to recover it. 00:25:49.874 [2024-11-20 07:27:53.140763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.874 [2024-11-20 07:27:53.140792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.874 qpair failed and we were unable to recover it. 00:25:49.874 [2024-11-20 07:27:53.140956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.874 [2024-11-20 07:27:53.141000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.874 qpair failed and we were unable to recover it. 00:25:49.874 [2024-11-20 07:27:53.141135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.874 [2024-11-20 07:27:53.141164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.874 qpair failed and we were unable to recover it. 00:25:49.874 [2024-11-20 07:27:53.141298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.874 [2024-11-20 07:27:53.141396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.874 qpair failed and we were unable to recover it. 00:25:49.874 [2024-11-20 07:27:53.141493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.874 [2024-11-20 07:27:53.141521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.874 qpair failed and we were unable to recover it. 00:25:49.874 [2024-11-20 07:27:53.141614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.874 [2024-11-20 07:27:53.141641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.874 qpair failed and we were unable to recover it. 00:25:49.874 [2024-11-20 07:27:53.141742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.874 [2024-11-20 07:27:53.141769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.874 qpair failed and we were unable to recover it. 00:25:49.874 [2024-11-20 07:27:53.141864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.874 [2024-11-20 07:27:53.141891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.874 qpair failed and we were unable to recover it. 00:25:49.874 [2024-11-20 07:27:53.141985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.874 [2024-11-20 07:27:53.142013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.874 qpair failed and we were unable to recover it. 
00:25:49.874 [2024-11-20 07:27:53.142104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.875 [2024-11-20 07:27:53.142131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.875 qpair failed and we were unable to recover it. 00:25:49.875 [2024-11-20 07:27:53.142243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.875 [2024-11-20 07:27:53.142282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.875 qpair failed and we were unable to recover it. 00:25:49.875 [2024-11-20 07:27:53.142408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.875 [2024-11-20 07:27:53.142437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.875 qpair failed and we were unable to recover it. 00:25:49.875 [2024-11-20 07:27:53.142531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.875 [2024-11-20 07:27:53.142559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.875 qpair failed and we were unable to recover it. 00:25:49.875 [2024-11-20 07:27:53.142689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.875 [2024-11-20 07:27:53.142716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.875 qpair failed and we were unable to recover it. 00:25:49.875 [2024-11-20 07:27:53.142836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.875 [2024-11-20 07:27:53.142864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.875 qpair failed and we were unable to recover it. 00:25:49.875 [2024-11-20 07:27:53.142985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.875 [2024-11-20 07:27:53.143012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.875 qpair failed and we were unable to recover it. 00:25:49.875 [2024-11-20 07:27:53.143151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.875 [2024-11-20 07:27:53.143180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.875 qpair failed and we were unable to recover it. 00:25:49.875 [2024-11-20 07:27:53.143283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.875 [2024-11-20 07:27:53.143328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.875 qpair failed and we were unable to recover it. 00:25:49.875 [2024-11-20 07:27:53.143415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.875 [2024-11-20 07:27:53.143442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.875 qpair failed and we were unable to recover it. 
00:25:49.875 [2024-11-20 07:27:53.143537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.875 [2024-11-20 07:27:53.143565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.875 qpair failed and we were unable to recover it. 00:25:49.875 [2024-11-20 07:27:53.143657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.875 [2024-11-20 07:27:53.143685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.875 qpair failed and we were unable to recover it. 00:25:49.875 [2024-11-20 07:27:53.143771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.875 [2024-11-20 07:27:53.143798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.875 qpair failed and we were unable to recover it. 00:25:49.875 [2024-11-20 07:27:53.143889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.875 [2024-11-20 07:27:53.143917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.875 qpair failed and we were unable to recover it. 00:25:49.875 [2024-11-20 07:27:53.144050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.875 [2024-11-20 07:27:53.144077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.875 qpair failed and we were unable to recover it. 00:25:49.875 [2024-11-20 07:27:53.144203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.875 [2024-11-20 07:27:53.144243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.875 qpair failed and we were unable to recover it. 00:25:49.875 [2024-11-20 07:27:53.144348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.875 [2024-11-20 07:27:53.144376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.875 qpair failed and we were unable to recover it. 00:25:49.875 [2024-11-20 07:27:53.144474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.875 [2024-11-20 07:27:53.144513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.875 qpair failed and we were unable to recover it. 00:25:49.875 [2024-11-20 07:27:53.144632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.875 [2024-11-20 07:27:53.144660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.875 qpair failed and we were unable to recover it. 00:25:49.875 [2024-11-20 07:27:53.144763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.875 [2024-11-20 07:27:53.144790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.875 qpair failed and we were unable to recover it. 
00:25:49.875 [2024-11-20 07:27:53.144873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.875 [2024-11-20 07:27:53.144899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.875 qpair failed and we were unable to recover it. 00:25:49.875 [2024-11-20 07:27:53.145024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.875 [2024-11-20 07:27:53.145067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.875 qpair failed and we were unable to recover it. 00:25:49.875 [2024-11-20 07:27:53.145146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.875 [2024-11-20 07:27:53.145172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.875 qpair failed and we were unable to recover it. 00:25:49.875 [2024-11-20 07:27:53.145253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.875 [2024-11-20 07:27:53.145279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.875 qpair failed and we were unable to recover it. 00:25:49.875 [2024-11-20 07:27:53.145373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.875 [2024-11-20 07:27:53.145400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.875 qpair failed and we were unable to recover it. 00:25:49.875 [2024-11-20 07:27:53.145486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.875 [2024-11-20 07:27:53.145512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.875 qpair failed and we were unable to recover it. 00:25:49.875 [2024-11-20 07:27:53.145599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.875 [2024-11-20 07:27:53.145625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.875 qpair failed and we were unable to recover it. 00:25:49.875 [2024-11-20 07:27:53.145702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.875 [2024-11-20 07:27:53.145739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.875 qpair failed and we were unable to recover it. 00:25:49.875 [2024-11-20 07:27:53.145833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.875 [2024-11-20 07:27:53.145866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.875 qpair failed and we were unable to recover it. 00:25:49.875 [2024-11-20 07:27:53.145961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.875 [2024-11-20 07:27:53.145989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.875 qpair failed and we were unable to recover it. 
00:25:49.875 [2024-11-20 07:27:53.146076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.875 [2024-11-20 07:27:53.146102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.876 qpair failed and we were unable to recover it. 00:25:49.876 [2024-11-20 07:27:53.146210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.876 [2024-11-20 07:27:53.146250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.876 qpair failed and we were unable to recover it. 00:25:49.876 [2024-11-20 07:27:53.146410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.876 [2024-11-20 07:27:53.146439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.876 qpair failed and we were unable to recover it. 00:25:49.876 [2024-11-20 07:27:53.146537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.876 [2024-11-20 07:27:53.146566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.876 qpair failed and we were unable to recover it. 00:25:49.876 [2024-11-20 07:27:53.146664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.876 [2024-11-20 07:27:53.146691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.876 qpair failed and we were unable to recover it. 00:25:49.876 [2024-11-20 07:27:53.146790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.876 [2024-11-20 07:27:53.146817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.876 qpair failed and we were unable to recover it. 00:25:49.876 [2024-11-20 07:27:53.146921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.876 [2024-11-20 07:27:53.146960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.876 qpair failed and we were unable to recover it. 00:25:49.876 [2024-11-20 07:27:53.147065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.876 [2024-11-20 07:27:53.147093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.876 qpair failed and we were unable to recover it. 00:25:49.876 [2024-11-20 07:27:53.147230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.876 [2024-11-20 07:27:53.147257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.876 qpair failed and we were unable to recover it. 00:25:49.876 [2024-11-20 07:27:53.147347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.876 [2024-11-20 07:27:53.147374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.876 qpair failed and we were unable to recover it. 
00:25:49.876 [2024-11-20 07:27:53.147475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.876 [2024-11-20 07:27:53.147515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.876 qpair failed and we were unable to recover it. 00:25:49.876 [2024-11-20 07:27:53.147601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.876 [2024-11-20 07:27:53.147630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.876 qpair failed and we were unable to recover it. 00:25:49.876 [2024-11-20 07:27:53.147748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.876 [2024-11-20 07:27:53.147776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.876 qpair failed and we were unable to recover it. 00:25:49.876 [2024-11-20 07:27:53.147853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.876 [2024-11-20 07:27:53.147891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.876 qpair failed and we were unable to recover it. 00:25:49.876 [2024-11-20 07:27:53.147978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.876 [2024-11-20 07:27:53.148009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.876 qpair failed and we were unable to recover it. 00:25:49.876 [2024-11-20 07:27:53.149009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.876 [2024-11-20 07:27:53.149047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.876 qpair failed and we were unable to recover it. 00:25:49.876 [2024-11-20 07:27:53.149201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.876 [2024-11-20 07:27:53.149229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.876 qpair failed and we were unable to recover it. 00:25:49.876 [2024-11-20 07:27:53.149333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.876 [2024-11-20 07:27:53.149361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.876 qpair failed and we were unable to recover it. 00:25:49.876 [2024-11-20 07:27:53.149452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.876 [2024-11-20 07:27:53.149479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.876 qpair failed and we were unable to recover it. 00:25:49.876 [2024-11-20 07:27:53.149564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.876 [2024-11-20 07:27:53.149590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.876 qpair failed and we were unable to recover it. 
00:25:49.876 [2024-11-20 07:27:53.149701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.876 [2024-11-20 07:27:53.149727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.876 qpair failed and we were unable to recover it. 00:25:49.876 [2024-11-20 07:27:53.149813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.876 [2024-11-20 07:27:53.149838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.876 qpair failed and we were unable to recover it. 00:25:49.876 [2024-11-20 07:27:53.149933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.876 [2024-11-20 07:27:53.149959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.876 qpair failed and we were unable to recover it. 00:25:49.876 [2024-11-20 07:27:53.150053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.876 [2024-11-20 07:27:53.150079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.876 qpair failed and we were unable to recover it. 00:25:49.876 [2024-11-20 07:27:53.150178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.876 [2024-11-20 07:27:53.150204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.876 qpair failed and we were unable to recover it. 00:25:49.876 [2024-11-20 07:27:53.150341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.876 [2024-11-20 07:27:53.150380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.876 qpair failed and we were unable to recover it. 00:25:49.876 [2024-11-20 07:27:53.150472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.876 [2024-11-20 07:27:53.150498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.876 qpair failed and we were unable to recover it. 00:25:49.876 [2024-11-20 07:27:53.150582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.876 [2024-11-20 07:27:53.150608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.876 qpair failed and we were unable to recover it. 00:25:49.876 [2024-11-20 07:27:53.150751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.876 [2024-11-20 07:27:53.150777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.877 qpair failed and we were unable to recover it. 00:25:49.877 [2024-11-20 07:27:53.150892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.877 [2024-11-20 07:27:53.150925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.877 qpair failed and we were unable to recover it. 
00:25:49.877 [2024-11-20 07:27:53.151012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.877 [2024-11-20 07:27:53.151037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.877 qpair failed and we were unable to recover it. 00:25:49.877 [2024-11-20 07:27:53.151154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.877 [2024-11-20 07:27:53.151181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.877 qpair failed and we were unable to recover it. 00:25:49.877 [2024-11-20 07:27:53.151264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.877 [2024-11-20 07:27:53.151290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.877 qpair failed and we were unable to recover it. 00:25:49.877 [2024-11-20 07:27:53.151403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.877 [2024-11-20 07:27:53.151432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.877 qpair failed and we were unable to recover it. 00:25:49.877 [2024-11-20 07:27:53.151528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.877 [2024-11-20 07:27:53.151554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.877 qpair failed and we were unable to recover it. 00:25:49.877 [2024-11-20 07:27:53.151677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.877 [2024-11-20 07:27:53.151703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.877 qpair failed and we were unable to recover it. 00:25:49.877 [2024-11-20 07:27:53.151832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.877 [2024-11-20 07:27:53.151858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.877 qpair failed and we were unable to recover it. 00:25:49.877 [2024-11-20 07:27:53.151935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.877 [2024-11-20 07:27:53.151960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.877 qpair failed and we were unable to recover it. 00:25:49.877 [2024-11-20 07:27:53.152055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.877 [2024-11-20 07:27:53.152081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.877 qpair failed and we were unable to recover it. 00:25:49.877 [2024-11-20 07:27:53.152210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.877 [2024-11-20 07:27:53.152236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.877 qpair failed and we were unable to recover it. 
00:25:49.877 [2024-11-20 07:27:53.152354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.877 [2024-11-20 07:27:53.152395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.877 qpair failed and we were unable to recover it. 00:25:49.877 [2024-11-20 07:27:53.152501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.877 [2024-11-20 07:27:53.152541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.877 qpair failed and we were unable to recover it. 00:25:49.877 [2024-11-20 07:27:53.152677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.877 [2024-11-20 07:27:53.152705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.877 qpair failed and we were unable to recover it. 00:25:49.877 [2024-11-20 07:27:53.152832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.877 [2024-11-20 07:27:53.152859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.877 qpair failed and we were unable to recover it. 00:25:49.877 [2024-11-20 07:27:53.152975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.877 [2024-11-20 07:27:53.153002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.877 qpair failed and we were unable to recover it. 00:25:49.877 [2024-11-20 07:27:53.153087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.877 [2024-11-20 07:27:53.153115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.877 qpair failed and we were unable to recover it. 00:25:49.877 [2024-11-20 07:27:53.153230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.877 [2024-11-20 07:27:53.153257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.877 qpair failed and we were unable to recover it. 00:25:49.877 [2024-11-20 07:27:53.153382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.877 [2024-11-20 07:27:53.153410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.877 qpair failed and we were unable to recover it. 00:25:49.877 [2024-11-20 07:27:53.153496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.877 [2024-11-20 07:27:53.153523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.877 qpair failed and we were unable to recover it. 00:25:49.877 [2024-11-20 07:27:53.153637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.877 [2024-11-20 07:27:53.153664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.877 qpair failed and we were unable to recover it. 
00:25:49.877 [2024-11-20 07:27:53.153772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.877 [2024-11-20 07:27:53.153798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.877 qpair failed and we were unable to recover it. 00:25:49.877 [2024-11-20 07:27:53.153885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.877 [2024-11-20 07:27:53.153911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.877 qpair failed and we were unable to recover it. 00:25:49.877 [2024-11-20 07:27:53.153996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.877 [2024-11-20 07:27:53.154029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.877 qpair failed and we were unable to recover it. 00:25:49.877 [2024-11-20 07:27:53.154136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.877 [2024-11-20 07:27:53.154176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.877 qpair failed and we were unable to recover it. 00:25:49.877 [2024-11-20 07:27:53.154301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.877 [2024-11-20 07:27:53.154344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.877 qpair failed and we were unable to recover it. 00:25:49.877 [2024-11-20 07:27:53.154437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.877 [2024-11-20 07:27:53.154464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.877 qpair failed and we were unable to recover it. 00:25:49.877 [2024-11-20 07:27:53.154562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.877 [2024-11-20 07:27:53.154589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.877 qpair failed and we were unable to recover it. 00:25:49.877 [2024-11-20 07:27:53.154693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.877 [2024-11-20 07:27:53.154720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.877 qpair failed and we were unable to recover it. 00:25:49.877 [2024-11-20 07:27:53.154803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.877 [2024-11-20 07:27:53.154830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.877 qpair failed and we were unable to recover it. 00:25:49.877 [2024-11-20 07:27:53.154934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.877 [2024-11-20 07:27:53.154963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.877 qpair failed and we were unable to recover it. 
00:25:49.878 [2024-11-20 07:27:53.157468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.878 [2024-11-20 07:27:53.157507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420
00:25:49.878 qpair failed and we were unable to recover it.
00:25:49.879 [2024-11-20 07:27:53.161402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.879 [2024-11-20 07:27:53.161442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420
00:25:49.879 qpair failed and we were unable to recover it.
00:25:49.884 [2024-11-20 07:27:53.181111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.884 [2024-11-20 07:27:53.181140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.884 qpair failed and we were unable to recover it. 00:25:49.884 [2024-11-20 07:27:53.181225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.884 [2024-11-20 07:27:53.181253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.884 qpair failed and we were unable to recover it. 00:25:49.884 [2024-11-20 07:27:53.181353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.884 [2024-11-20 07:27:53.181381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.884 qpair failed and we were unable to recover it. 00:25:49.884 [2024-11-20 07:27:53.181469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.884 [2024-11-20 07:27:53.181497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.884 qpair failed and we were unable to recover it. 00:25:49.884 [2024-11-20 07:27:53.181609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.884 [2024-11-20 07:27:53.181636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.884 qpair failed and we were unable to recover it. 00:25:49.884 [2024-11-20 07:27:53.181723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.884 [2024-11-20 07:27:53.181750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.884 qpair failed and we were unable to recover it. 00:25:49.884 [2024-11-20 07:27:53.181843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.884 [2024-11-20 07:27:53.181871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.884 qpair failed and we were unable to recover it. 00:25:49.884 [2024-11-20 07:27:53.181952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.884 [2024-11-20 07:27:53.181978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.884 qpair failed and we were unable to recover it. 00:25:49.884 [2024-11-20 07:27:53.182131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.884 [2024-11-20 07:27:53.182170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.884 qpair failed and we were unable to recover it. 00:25:49.884 [2024-11-20 07:27:53.182270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.884 [2024-11-20 07:27:53.182297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.884 qpair failed and we were unable to recover it. 
00:25:49.884 [2024-11-20 07:27:53.182441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.884 [2024-11-20 07:27:53.182467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.884 qpair failed and we were unable to recover it. 00:25:49.884 [2024-11-20 07:27:53.182559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.884 [2024-11-20 07:27:53.182585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.884 qpair failed and we were unable to recover it. 00:25:49.884 [2024-11-20 07:27:53.182676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.884 [2024-11-20 07:27:53.182703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.884 qpair failed and we were unable to recover it. 00:25:49.884 [2024-11-20 07:27:53.182826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.884 [2024-11-20 07:27:53.182866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.884 qpair failed and we were unable to recover it. 00:25:49.884 [2024-11-20 07:27:53.182947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.884 [2024-11-20 07:27:53.182974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.884 qpair failed and we were unable to recover it. 00:25:49.884 [2024-11-20 07:27:53.183099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.884 [2024-11-20 07:27:53.183138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.884 qpair failed and we were unable to recover it. 00:25:49.884 [2024-11-20 07:27:53.183241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.884 [2024-11-20 07:27:53.183269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.884 qpair failed and we were unable to recover it. 00:25:49.884 [2024-11-20 07:27:53.183360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.884 [2024-11-20 07:27:53.183387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.884 qpair failed and we were unable to recover it. 00:25:49.884 [2024-11-20 07:27:53.183502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.884 [2024-11-20 07:27:53.183528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.884 qpair failed and we were unable to recover it. 00:25:49.884 [2024-11-20 07:27:53.183622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.884 [2024-11-20 07:27:53.183648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.884 qpair failed and we were unable to recover it. 
00:25:49.884 [2024-11-20 07:27:53.183742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.884 [2024-11-20 07:27:53.183768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.885 qpair failed and we were unable to recover it. 00:25:49.885 [2024-11-20 07:27:53.183885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.885 [2024-11-20 07:27:53.183913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.885 qpair failed and we were unable to recover it. 00:25:49.885 [2024-11-20 07:27:53.184003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.885 [2024-11-20 07:27:53.184031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.885 qpair failed and we were unable to recover it. 00:25:49.885 [2024-11-20 07:27:53.184106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.885 [2024-11-20 07:27:53.184133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.885 qpair failed and we were unable to recover it. 00:25:49.885 [2024-11-20 07:27:53.184251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.885 [2024-11-20 07:27:53.184278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.885 qpair failed and we were unable to recover it. 00:25:49.885 [2024-11-20 07:27:53.184372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.885 [2024-11-20 07:27:53.184399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.885 qpair failed and we were unable to recover it. 00:25:49.885 [2024-11-20 07:27:53.184500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.885 [2024-11-20 07:27:53.184540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.885 qpair failed and we were unable to recover it. 00:25:49.885 [2024-11-20 07:27:53.184633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.885 [2024-11-20 07:27:53.184660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.885 qpair failed and we were unable to recover it. 00:25:49.885 [2024-11-20 07:27:53.184782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.885 [2024-11-20 07:27:53.184808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.885 qpair failed and we were unable to recover it. 00:25:49.885 [2024-11-20 07:27:53.184898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.885 [2024-11-20 07:27:53.184924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.885 qpair failed and we were unable to recover it. 
00:25:49.885 [2024-11-20 07:27:53.185004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.885 [2024-11-20 07:27:53.185031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.885 qpair failed and we were unable to recover it. 00:25:49.885 [2024-11-20 07:27:53.185171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.885 [2024-11-20 07:27:53.185197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.885 qpair failed and we were unable to recover it. 00:25:49.885 [2024-11-20 07:27:53.185278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.885 [2024-11-20 07:27:53.185314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.885 qpair failed and we were unable to recover it. 00:25:49.885 [2024-11-20 07:27:53.185418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.885 [2024-11-20 07:27:53.185445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.885 qpair failed and we were unable to recover it. 00:25:49.885 [2024-11-20 07:27:53.185576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.885 [2024-11-20 07:27:53.185615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.885 qpair failed and we were unable to recover it. 00:25:49.885 [2024-11-20 07:27:53.185766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.885 [2024-11-20 07:27:53.185794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.885 qpair failed and we were unable to recover it. 00:25:49.885 [2024-11-20 07:27:53.185884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.885 [2024-11-20 07:27:53.185912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.885 qpair failed and we were unable to recover it. 00:25:49.885 [2024-11-20 07:27:53.185996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.885 [2024-11-20 07:27:53.186023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.885 qpair failed and we were unable to recover it. 00:25:49.885 [2024-11-20 07:27:53.186133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.885 [2024-11-20 07:27:53.186159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.885 qpair failed and we were unable to recover it. 00:25:49.885 [2024-11-20 07:27:53.186246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.885 [2024-11-20 07:27:53.186272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.885 qpair failed and we were unable to recover it. 
00:25:49.885 [2024-11-20 07:27:53.186393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.885 [2024-11-20 07:27:53.186421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.885 qpair failed and we were unable to recover it. 00:25:49.885 [2024-11-20 07:27:53.186504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.885 [2024-11-20 07:27:53.186531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.885 qpair failed and we were unable to recover it. 00:25:49.885 [2024-11-20 07:27:53.186646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.885 [2024-11-20 07:27:53.186673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.885 qpair failed and we were unable to recover it. 00:25:49.885 [2024-11-20 07:27:53.186784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.885 [2024-11-20 07:27:53.186809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.885 qpair failed and we were unable to recover it. 00:25:49.885 [2024-11-20 07:27:53.186900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.885 [2024-11-20 07:27:53.186931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.885 qpair failed and we were unable to recover it. 00:25:49.885 [2024-11-20 07:27:53.187016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.885 [2024-11-20 07:27:53.187042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.885 qpair failed and we were unable to recover it. 00:25:49.885 [2024-11-20 07:27:53.187125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.885 [2024-11-20 07:27:53.187156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.885 qpair failed and we were unable to recover it. 00:25:49.885 [2024-11-20 07:27:53.187284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.885 [2024-11-20 07:27:53.187331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.885 qpair failed and we were unable to recover it. 00:25:49.885 [2024-11-20 07:27:53.187454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.885 [2024-11-20 07:27:53.187481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.885 qpair failed and we were unable to recover it. 00:25:49.885 [2024-11-20 07:27:53.187565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.885 [2024-11-20 07:27:53.187591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.885 qpair failed and we were unable to recover it. 
00:25:49.885 [2024-11-20 07:27:53.187687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.885 [2024-11-20 07:27:53.187713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.885 qpair failed and we were unable to recover it. 00:25:49.885 [2024-11-20 07:27:53.187828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.885 [2024-11-20 07:27:53.187855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.885 qpair failed and we were unable to recover it. 00:25:49.885 [2024-11-20 07:27:53.187940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.885 [2024-11-20 07:27:53.187967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.885 qpair failed and we were unable to recover it. 00:25:49.885 [2024-11-20 07:27:53.188059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.885 [2024-11-20 07:27:53.188086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.885 qpair failed and we were unable to recover it. 00:25:49.886 [2024-11-20 07:27:53.188185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.886 [2024-11-20 07:27:53.188224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.886 qpair failed and we were unable to recover it. 00:25:49.886 [2024-11-20 07:27:53.188343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.886 [2024-11-20 07:27:53.188372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.886 qpair failed and we were unable to recover it. 00:25:49.886 [2024-11-20 07:27:53.188486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.886 [2024-11-20 07:27:53.188514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.886 qpair failed and we were unable to recover it. 00:25:49.886 [2024-11-20 07:27:53.188600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.886 [2024-11-20 07:27:53.188627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.886 qpair failed and we were unable to recover it. 00:25:49.886 [2024-11-20 07:27:53.188713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.886 [2024-11-20 07:27:53.188741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.886 qpair failed and we were unable to recover it. 00:25:49.886 [2024-11-20 07:27:53.188854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.886 [2024-11-20 07:27:53.188880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.886 qpair failed and we were unable to recover it. 
00:25:49.886 [2024-11-20 07:27:53.188979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.886 [2024-11-20 07:27:53.189005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.886 qpair failed and we were unable to recover it. 00:25:49.886 [2024-11-20 07:27:53.189092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.886 [2024-11-20 07:27:53.189132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.886 qpair failed and we were unable to recover it. 00:25:49.886 [2024-11-20 07:27:53.189230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.886 [2024-11-20 07:27:53.189269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.886 qpair failed and we were unable to recover it. 00:25:49.886 [2024-11-20 07:27:53.189377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.886 [2024-11-20 07:27:53.189406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.886 qpair failed and we were unable to recover it. 00:25:49.886 [2024-11-20 07:27:53.189482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.886 [2024-11-20 07:27:53.189509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.886 qpair failed and we were unable to recover it. 00:25:49.886 [2024-11-20 07:27:53.189596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.886 [2024-11-20 07:27:53.189623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.886 qpair failed and we were unable to recover it. 00:25:49.886 [2024-11-20 07:27:53.189702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.886 [2024-11-20 07:27:53.189729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.886 qpair failed and we were unable to recover it. 00:25:49.886 [2024-11-20 07:27:53.189839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.886 [2024-11-20 07:27:53.189866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.886 qpair failed and we were unable to recover it. 00:25:49.886 [2024-11-20 07:27:53.189946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.886 [2024-11-20 07:27:53.189976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.886 qpair failed and we were unable to recover it. 00:25:49.886 [2024-11-20 07:27:53.190074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.886 [2024-11-20 07:27:53.190113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.886 qpair failed and we were unable to recover it. 
00:25:49.886 [2024-11-20 07:27:53.190232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.886 [2024-11-20 07:27:53.190259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.886 qpair failed and we were unable to recover it. 00:25:49.886 [2024-11-20 07:27:53.190279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:49.886 [2024-11-20 07:27:53.190354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.886 [2024-11-20 07:27:53.190379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.886 qpair failed and we were unable to recover it. 00:25:49.886 [2024-11-20 07:27:53.190495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.886 [2024-11-20 07:27:53.190521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.886 qpair failed and we were unable to recover it. 00:25:49.886 [2024-11-20 07:27:53.190628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.886 [2024-11-20 07:27:53.190656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.886 qpair failed and we were unable to recover it. 00:25:49.886 [2024-11-20 07:27:53.190739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.886 [2024-11-20 07:27:53.190767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.886 qpair failed and we were unable to recover it. 00:25:49.886 [2024-11-20 07:27:53.190882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.886 [2024-11-20 07:27:53.190909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.886 qpair failed and we were unable to recover it. 00:25:49.886 [2024-11-20 07:27:53.191019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.886 [2024-11-20 07:27:53.191045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.886 qpair failed and we were unable to recover it. 00:25:49.886 [2024-11-20 07:27:53.191121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.886 [2024-11-20 07:27:53.191148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.886 qpair failed and we were unable to recover it. 00:25:49.886 [2024-11-20 07:27:53.191272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.886 [2024-11-20 07:27:53.191301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.886 qpair failed and we were unable to recover it. 00:25:49.886 [2024-11-20 07:27:53.191432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.886 [2024-11-20 07:27:53.191459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.886 qpair failed and we were unable to recover it. 
00:25:49.886 [2024-11-20 07:27:53.191579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.886 [2024-11-20 07:27:53.191606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.886 qpair failed and we were unable to recover it. 00:25:49.886 [2024-11-20 07:27:53.191703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.886 [2024-11-20 07:27:53.191729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.886 qpair failed and we were unable to recover it. 00:25:49.886 [2024-11-20 07:27:53.191843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.886 [2024-11-20 07:27:53.191869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.886 qpair failed and we were unable to recover it. 00:25:49.887 [2024-11-20 07:27:53.191958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.887 [2024-11-20 07:27:53.191985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.887 qpair failed and we were unable to recover it. 00:25:49.887 [2024-11-20 07:27:53.192066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.887 [2024-11-20 07:27:53.192093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.887 qpair failed and we were unable to recover it. 00:25:49.887 [2024-11-20 07:27:53.192220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.887 [2024-11-20 07:27:53.192261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.887 qpair failed and we were unable to recover it. 00:25:49.887 [2024-11-20 07:27:53.192392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.887 [2024-11-20 07:27:53.192420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.887 qpair failed and we were unable to recover it. 00:25:49.887 [2024-11-20 07:27:53.192535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.887 [2024-11-20 07:27:53.192561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.887 qpair failed and we were unable to recover it. 00:25:49.887 [2024-11-20 07:27:53.192655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.887 [2024-11-20 07:27:53.192681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.887 qpair failed and we were unable to recover it. 00:25:49.887 [2024-11-20 07:27:53.192772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.887 [2024-11-20 07:27:53.192799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.887 qpair failed and we were unable to recover it. 
00:25:49.887 [2024-11-20 07:27:53.192918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.887 [2024-11-20 07:27:53.192945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.887 qpair failed and we were unable to recover it. 00:25:49.887 [2024-11-20 07:27:53.193044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.887 [2024-11-20 07:27:53.193084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.887 qpair failed and we were unable to recover it. 00:25:49.887 [2024-11-20 07:27:53.193215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.887 [2024-11-20 07:27:53.193255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.887 qpair failed and we were unable to recover it. 00:25:49.887 [2024-11-20 07:27:53.193389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.887 [2024-11-20 07:27:53.193418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.887 qpair failed and we were unable to recover it. 00:25:49.887 [2024-11-20 07:27:53.193515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.887 [2024-11-20 07:27:53.193541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.887 qpair failed and we were unable to recover it. 00:25:49.887 [2024-11-20 07:27:53.193643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.887 [2024-11-20 07:27:53.193674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.887 qpair failed and we were unable to recover it. 00:25:49.887 [2024-11-20 07:27:53.193788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.887 [2024-11-20 07:27:53.193814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.887 qpair failed and we were unable to recover it. 00:25:49.887 [2024-11-20 07:27:53.193927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.887 [2024-11-20 07:27:53.193953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.887 qpair failed and we were unable to recover it. 00:25:49.887 [2024-11-20 07:27:53.194040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.887 [2024-11-20 07:27:53.194066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.887 qpair failed and we were unable to recover it. 00:25:49.887 [2024-11-20 07:27:53.194194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.887 [2024-11-20 07:27:53.194224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.887 qpair failed and we were unable to recover it. 
00:25:49.887 [2024-11-20 07:27:53.194336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.887 [2024-11-20 07:27:53.194363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.887 qpair failed and we were unable to recover it. 00:25:49.887 [2024-11-20 07:27:53.194454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.887 [2024-11-20 07:27:53.194480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.887 qpair failed and we were unable to recover it. 00:25:49.887 [2024-11-20 07:27:53.194584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.887 [2024-11-20 07:27:53.194610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.887 qpair failed and we were unable to recover it. 00:25:49.887 [2024-11-20 07:27:53.194723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.887 [2024-11-20 07:27:53.194749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.887 qpair failed and we were unable to recover it. 00:25:49.887 [2024-11-20 07:27:53.194832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.887 [2024-11-20 07:27:53.194858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.887 qpair failed and we were unable to recover it. 00:25:49.887 [2024-11-20 07:27:53.194971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.887 [2024-11-20 07:27:53.195000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.887 qpair failed and we were unable to recover it. 00:25:49.887 [2024-11-20 07:27:53.195138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.887 [2024-11-20 07:27:53.195178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.887 qpair failed and we were unable to recover it. 00:25:49.887 [2024-11-20 07:27:53.195283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.887 [2024-11-20 07:27:53.195345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.887 qpair failed and we were unable to recover it. 00:25:49.887 [2024-11-20 07:27:53.195447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.887 [2024-11-20 07:27:53.195475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.887 qpair failed and we were unable to recover it. 00:25:49.887 [2024-11-20 07:27:53.195568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.887 [2024-11-20 07:27:53.195596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.887 qpair failed and we were unable to recover it. 
00:25:49.887 [2024-11-20 07:27:53.195697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.887 [2024-11-20 07:27:53.195724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.887 qpair failed and we were unable to recover it. 00:25:49.887 [2024-11-20 07:27:53.195812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.887 [2024-11-20 07:27:53.195838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.887 qpair failed and we were unable to recover it. 00:25:49.887 [2024-11-20 07:27:53.195976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.887 [2024-11-20 07:27:53.196003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.887 qpair failed and we were unable to recover it. 00:25:49.887 [2024-11-20 07:27:53.196163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.887 [2024-11-20 07:27:53.196193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.887 qpair failed and we were unable to recover it. 00:25:49.887 [2024-11-20 07:27:53.196289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.887 [2024-11-20 07:27:53.196325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.887 qpair failed and we were unable to recover it. 00:25:49.887 [2024-11-20 07:27:53.196417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.887 [2024-11-20 07:27:53.196445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.887 qpair failed and we were unable to recover it. 00:25:49.887 [2024-11-20 07:27:53.196531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.887 [2024-11-20 07:27:53.196558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.887 qpair failed and we were unable to recover it. 00:25:49.887 [2024-11-20 07:27:53.196669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.888 [2024-11-20 07:27:53.196696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.888 qpair failed and we were unable to recover it. 00:25:49.888 [2024-11-20 07:27:53.196784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.888 [2024-11-20 07:27:53.196811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.888 qpair failed and we were unable to recover it. 00:25:49.888 [2024-11-20 07:27:53.196922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.888 [2024-11-20 07:27:53.196950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.888 qpair failed and we were unable to recover it. 
00:25:49.888 [2024-11-20 07:27:53.197041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.888 [2024-11-20 07:27:53.197068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.888 qpair failed and we were unable to recover it. 00:25:49.888 [2024-11-20 07:27:53.197153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.888 [2024-11-20 07:27:53.197182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.888 qpair failed and we were unable to recover it. 00:25:49.888 [2024-11-20 07:27:53.197274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.888 [2024-11-20 07:27:53.197314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.888 qpair failed and we were unable to recover it. 00:25:49.888 [2024-11-20 07:27:53.197422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.888 [2024-11-20 07:27:53.197461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.888 qpair failed and we were unable to recover it. 00:25:49.888 [2024-11-20 07:27:53.197554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.888 [2024-11-20 07:27:53.197582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.888 qpair failed and we were unable to recover it. 00:25:49.888 [2024-11-20 07:27:53.197706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.888 [2024-11-20 07:27:53.197732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.888 qpair failed and we were unable to recover it. 00:25:49.888 [2024-11-20 07:27:53.197820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.888 [2024-11-20 07:27:53.197853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.888 qpair failed and we were unable to recover it. 00:25:49.888 [2024-11-20 07:27:53.197968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.888 [2024-11-20 07:27:53.197996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.888 qpair failed and we were unable to recover it. 00:25:49.888 [2024-11-20 07:27:53.198078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.888 [2024-11-20 07:27:53.198112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.888 qpair failed and we were unable to recover it. 00:25:49.888 [2024-11-20 07:27:53.198219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.888 [2024-11-20 07:27:53.198258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.888 qpair failed and we were unable to recover it. 
00:25:49.888 [2024-11-20 07:27:53.198386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.888 [2024-11-20 07:27:53.198416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.888 qpair failed and we were unable to recover it.
00:25:49.888-00:25:49.894 [2024-11-20 07:27:53.198505 through 07:27:53.226237] The same three-message sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every subsequent reconnect attempt in this window, cycling through tqpairs 0x7fce10000b90, 0x7fce14000b90, 0x7fce1c000b90, and 0x1b2bfa0.
00:25:49.894 [2024-11-20 07:27:53.226329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.894 [2024-11-20 07:27:53.226355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.894 qpair failed and we were unable to recover it. 00:25:49.894 [2024-11-20 07:27:53.226447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.894 [2024-11-20 07:27:53.226473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.894 qpair failed and we were unable to recover it. 00:25:49.894 [2024-11-20 07:27:53.226560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.894 [2024-11-20 07:27:53.226587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.894 qpair failed and we were unable to recover it. 00:25:49.894 [2024-11-20 07:27:53.226696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.894 [2024-11-20 07:27:53.226723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.894 qpair failed and we were unable to recover it. 00:25:49.894 [2024-11-20 07:27:53.226811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.894 [2024-11-20 07:27:53.226839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.894 qpair failed and we were unable to recover it. 00:25:49.894 [2024-11-20 07:27:53.226926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.894 [2024-11-20 07:27:53.226952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.894 qpair failed and we were unable to recover it. 00:25:49.894 [2024-11-20 07:27:53.227040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.894 [2024-11-20 07:27:53.227067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.894 qpair failed and we were unable to recover it. 00:25:49.894 [2024-11-20 07:27:53.227201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.894 [2024-11-20 07:27:53.227227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.894 qpair failed and we were unable to recover it. 00:25:49.894 [2024-11-20 07:27:53.227335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.894 [2024-11-20 07:27:53.227363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.894 qpair failed and we were unable to recover it. 00:25:49.894 [2024-11-20 07:27:53.227451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.894 [2024-11-20 07:27:53.227478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.894 qpair failed and we were unable to recover it. 
00:25:49.894 [2024-11-20 07:27:53.227592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.894 [2024-11-20 07:27:53.227618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.894 qpair failed and we were unable to recover it. 00:25:49.894 [2024-11-20 07:27:53.227726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.894 [2024-11-20 07:27:53.227753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.894 qpair failed and we were unable to recover it. 00:25:49.894 [2024-11-20 07:27:53.227867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.894 [2024-11-20 07:27:53.227894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.894 qpair failed and we were unable to recover it. 00:25:49.894 [2024-11-20 07:27:53.227986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.894 [2024-11-20 07:27:53.228014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.894 qpair failed and we were unable to recover it. 00:25:49.894 [2024-11-20 07:27:53.228113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.894 [2024-11-20 07:27:53.228152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.894 qpair failed and we were unable to recover it. 00:25:49.894 [2024-11-20 07:27:53.228251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.894 [2024-11-20 07:27:53.228279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.894 qpair failed and we were unable to recover it. 00:25:49.894 [2024-11-20 07:27:53.228413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.894 [2024-11-20 07:27:53.228442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.894 qpair failed and we were unable to recover it. 00:25:49.894 [2024-11-20 07:27:53.228528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.894 [2024-11-20 07:27:53.228555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.894 qpair failed and we were unable to recover it. 00:25:49.894 [2024-11-20 07:27:53.228701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.894 [2024-11-20 07:27:53.228728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.894 qpair failed and we were unable to recover it. 00:25:49.894 [2024-11-20 07:27:53.228838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.894 [2024-11-20 07:27:53.228864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.894 qpair failed and we were unable to recover it. 
00:25:49.894 [2024-11-20 07:27:53.228982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.894 [2024-11-20 07:27:53.229010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.894 qpair failed and we were unable to recover it. 00:25:49.894 [2024-11-20 07:27:53.229096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.894 [2024-11-20 07:27:53.229122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.894 qpair failed and we were unable to recover it. 00:25:49.894 [2024-11-20 07:27:53.229236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.894 [2024-11-20 07:27:53.229276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.894 qpair failed and we were unable to recover it. 00:25:49.894 [2024-11-20 07:27:53.229376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.894 [2024-11-20 07:27:53.229404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.894 qpair failed and we were unable to recover it. 00:25:49.894 [2024-11-20 07:27:53.229511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.894 [2024-11-20 07:27:53.229538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.894 qpair failed and we were unable to recover it. 00:25:49.894 [2024-11-20 07:27:53.229622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.894 [2024-11-20 07:27:53.229648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.894 qpair failed and we were unable to recover it. 00:25:49.895 [2024-11-20 07:27:53.229724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.229750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.895 qpair failed and we were unable to recover it. 00:25:49.895 [2024-11-20 07:27:53.229835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.229862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.895 qpair failed and we were unable to recover it. 00:25:49.895 [2024-11-20 07:27:53.229961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.229986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.895 qpair failed and we were unable to recover it. 00:25:49.895 [2024-11-20 07:27:53.230094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.230120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.895 qpair failed and we were unable to recover it. 
00:25:49.895 [2024-11-20 07:27:53.230203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.230229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.895 qpair failed and we were unable to recover it. 00:25:49.895 [2024-11-20 07:27:53.230320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.230349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.895 qpair failed and we were unable to recover it. 00:25:49.895 [2024-11-20 07:27:53.230453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.230493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.895 qpair failed and we were unable to recover it. 00:25:49.895 [2024-11-20 07:27:53.230630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.230670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.895 qpair failed and we were unable to recover it. 00:25:49.895 [2024-11-20 07:27:53.230761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.230789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.895 qpair failed and we were unable to recover it. 00:25:49.895 [2024-11-20 07:27:53.230875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.230902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.895 qpair failed and we were unable to recover it. 00:25:49.895 [2024-11-20 07:27:53.230985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.231012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.895 qpair failed and we were unable to recover it. 00:25:49.895 [2024-11-20 07:27:53.231150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.231177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.895 qpair failed and we were unable to recover it. 00:25:49.895 [2024-11-20 07:27:53.231263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.231290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.895 qpair failed and we were unable to recover it. 00:25:49.895 [2024-11-20 07:27:53.231415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.231443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.895 qpair failed and we were unable to recover it. 
00:25:49.895 [2024-11-20 07:27:53.231523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.231549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.895 qpair failed and we were unable to recover it. 00:25:49.895 [2024-11-20 07:27:53.231643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.231669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.895 qpair failed and we were unable to recover it. 00:25:49.895 [2024-11-20 07:27:53.231762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.231789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.895 qpair failed and we were unable to recover it. 00:25:49.895 [2024-11-20 07:27:53.231883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.231910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.895 qpair failed and we were unable to recover it. 00:25:49.895 [2024-11-20 07:27:53.232028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.232056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.895 qpair failed and we were unable to recover it. 00:25:49.895 [2024-11-20 07:27:53.232133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.232159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.895 qpair failed and we were unable to recover it. 00:25:49.895 [2024-11-20 07:27:53.232245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.232274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.895 qpair failed and we were unable to recover it. 00:25:49.895 [2024-11-20 07:27:53.232377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.232405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.895 qpair failed and we were unable to recover it. 00:25:49.895 [2024-11-20 07:27:53.232495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.232522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.895 qpair failed and we were unable to recover it. 00:25:49.895 [2024-11-20 07:27:53.232599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.232625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.895 qpair failed and we were unable to recover it. 
00:25:49.895 [2024-11-20 07:27:53.232737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.232763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.895 qpair failed and we were unable to recover it. 00:25:49.895 [2024-11-20 07:27:53.232875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.232901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.895 qpair failed and we were unable to recover it. 00:25:49.895 [2024-11-20 07:27:53.233017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.233046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.895 qpair failed and we were unable to recover it. 00:25:49.895 [2024-11-20 07:27:53.233163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.233192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.895 qpair failed and we were unable to recover it. 00:25:49.895 [2024-11-20 07:27:53.233295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.233349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.895 qpair failed and we were unable to recover it. 00:25:49.895 [2024-11-20 07:27:53.233449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.233478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.895 qpair failed and we were unable to recover it. 00:25:49.895 [2024-11-20 07:27:53.233567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.233595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.895 qpair failed and we were unable to recover it. 00:25:49.895 [2024-11-20 07:27:53.233713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.233740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.895 qpair failed and we were unable to recover it. 00:25:49.895 [2024-11-20 07:27:53.233840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.233868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.895 qpair failed and we were unable to recover it. 00:25:49.895 [2024-11-20 07:27:53.233956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.233982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.895 qpair failed and we were unable to recover it. 
00:25:49.895 [2024-11-20 07:27:53.234067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.234094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.895 qpair failed and we were unable to recover it. 00:25:49.895 [2024-11-20 07:27:53.234211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.234238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.895 qpair failed and we were unable to recover it. 00:25:49.895 [2024-11-20 07:27:53.234322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.234350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.895 qpair failed and we were unable to recover it. 00:25:49.895 [2024-11-20 07:27:53.234446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.234473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.895 qpair failed and we were unable to recover it. 00:25:49.895 [2024-11-20 07:27:53.234555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.234581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.895 qpair failed and we were unable to recover it. 00:25:49.895 [2024-11-20 07:27:53.234697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.234723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.895 qpair failed and we were unable to recover it. 00:25:49.895 [2024-11-20 07:27:53.234839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.234865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.895 qpair failed and we were unable to recover it. 00:25:49.895 [2024-11-20 07:27:53.234990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.895 [2024-11-20 07:27:53.235019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.896 qpair failed and we were unable to recover it. 00:25:49.896 [2024-11-20 07:27:53.235106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.896 [2024-11-20 07:27:53.235133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.896 qpair failed and we were unable to recover it. 00:25:49.896 [2024-11-20 07:27:53.235231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.896 [2024-11-20 07:27:53.235270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.896 qpair failed and we were unable to recover it. 
00:25:49.896 [2024-11-20 07:27:53.235400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.896 [2024-11-20 07:27:53.235429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.896 qpair failed and we were unable to recover it. 00:25:49.896 [2024-11-20 07:27:53.235517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.896 [2024-11-20 07:27:53.235543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.896 qpair failed and we were unable to recover it. 00:25:49.896 [2024-11-20 07:27:53.235652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.896 [2024-11-20 07:27:53.235679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.896 qpair failed and we were unable to recover it. 00:25:49.896 [2024-11-20 07:27:53.235763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.896 [2024-11-20 07:27:53.235789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.896 qpair failed and we were unable to recover it. 00:25:49.896 [2024-11-20 07:27:53.235869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.896 [2024-11-20 07:27:53.235895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.896 qpair failed and we were unable to recover it. 00:25:49.896 [2024-11-20 07:27:53.235973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.896 [2024-11-20 07:27:53.235999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.896 qpair failed and we were unable to recover it. 00:25:49.896 [2024-11-20 07:27:53.236116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.896 [2024-11-20 07:27:53.236143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.896 qpair failed and we were unable to recover it. 00:25:49.896 [2024-11-20 07:27:53.236239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.896 [2024-11-20 07:27:53.236278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.896 qpair failed and we were unable to recover it. 00:25:49.896 [2024-11-20 07:27:53.236404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.896 [2024-11-20 07:27:53.236432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.896 qpair failed and we were unable to recover it. 00:25:49.896 [2024-11-20 07:27:53.236551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.896 [2024-11-20 07:27:53.236580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.896 qpair failed and we were unable to recover it. 
00:25:49.896 [2024-11-20 07:27:53.236691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.896 [2024-11-20 07:27:53.236718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.896 qpair failed and we were unable to recover it. 00:25:49.896 [2024-11-20 07:27:53.236810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.896 [2024-11-20 07:27:53.236837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.896 qpair failed and we were unable to recover it. 00:25:49.896 [2024-11-20 07:27:53.236946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.896 [2024-11-20 07:27:53.236973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.896 qpair failed and we were unable to recover it. 00:25:49.896 [2024-11-20 07:27:53.237056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.896 [2024-11-20 07:27:53.237084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.896 qpair failed and we were unable to recover it. 00:25:49.896 [2024-11-20 07:27:53.237178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.896 [2024-11-20 07:27:53.237205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.896 qpair failed and we were unable to recover it. 00:25:49.896 [2024-11-20 07:27:53.237288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.896 [2024-11-20 07:27:53.237322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.896 qpair failed and we were unable to recover it. 00:25:49.896 [2024-11-20 07:27:53.237437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.896 [2024-11-20 07:27:53.237463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.896 qpair failed and we were unable to recover it. 00:25:49.896 [2024-11-20 07:27:53.237574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.896 [2024-11-20 07:27:53.237600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.896 qpair failed and we were unable to recover it. 00:25:49.896 [2024-11-20 07:27:53.237685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.896 [2024-11-20 07:27:53.237711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.896 qpair failed and we were unable to recover it. 00:25:49.896 [2024-11-20 07:27:53.237823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.896 [2024-11-20 07:27:53.237849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.896 qpair failed and we were unable to recover it. 
00:25:49.896 [2024-11-20 07:27:53.237933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.896 [2024-11-20 07:27:53.237959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.896 qpair failed and we were unable to recover it. 00:25:49.896 [2024-11-20 07:27:53.238041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.896 [2024-11-20 07:27:53.238067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.896 qpair failed and we were unable to recover it. 00:25:49.896 [2024-11-20 07:27:53.238179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.896 [2024-11-20 07:27:53.238205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.896 qpair failed and we were unable to recover it. 00:25:49.896 [2024-11-20 07:27:53.238321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.896 [2024-11-20 07:27:53.238349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.896 qpair failed and we were unable to recover it. 00:25:49.896 [2024-11-20 07:27:53.238430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.896 [2024-11-20 07:27:53.238462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.896 qpair failed and we were unable to recover it. 00:25:49.896 [2024-11-20 07:27:53.238557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.896 [2024-11-20 07:27:53.238584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.896 qpair failed and we were unable to recover it. 00:25:49.896 [2024-11-20 07:27:53.238690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.896 [2024-11-20 07:27:53.238716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.896 qpair failed and we were unable to recover it. 00:25:49.896 [2024-11-20 07:27:53.238832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.896 [2024-11-20 07:27:53.238861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.896 qpair failed and we were unable to recover it. 00:25:49.896 [2024-11-20 07:27:53.238946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.896 [2024-11-20 07:27:53.238972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.896 qpair failed and we were unable to recover it. 00:25:49.897 [2024-11-20 07:27:53.239106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.897 [2024-11-20 07:27:53.239146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.897 qpair failed and we were unable to recover it. 
00:25:49.897 [2024-11-20 07:27:53.239240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.897 [2024-11-20 07:27:53.239270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.897 qpair failed and we were unable to recover it. 00:25:49.897 [2024-11-20 07:27:53.239435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.897 [2024-11-20 07:27:53.239474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.897 qpair failed and we were unable to recover it. 00:25:49.897 [2024-11-20 07:27:53.239590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.897 [2024-11-20 07:27:53.239618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.897 qpair failed and we were unable to recover it. 00:25:49.897 [2024-11-20 07:27:53.239737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.897 [2024-11-20 07:27:53.239763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.897 qpair failed and we were unable to recover it. 00:25:49.897 [2024-11-20 07:27:53.239873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.897 [2024-11-20 07:27:53.239899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.897 qpair failed and we were unable to recover it. 00:25:49.897 [2024-11-20 07:27:53.240016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.897 [2024-11-20 07:27:53.240043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.897 qpair failed and we were unable to recover it. 00:25:49.897 [2024-11-20 07:27:53.240155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.897 [2024-11-20 07:27:53.240182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.897 qpair failed and we were unable to recover it. 00:25:49.897 [2024-11-20 07:27:53.240275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.897 [2024-11-20 07:27:53.240307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.897 qpair failed and we were unable to recover it. 00:25:49.897 [2024-11-20 07:27:53.240427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.897 [2024-11-20 07:27:53.240454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.897 qpair failed and we were unable to recover it. 00:25:49.897 [2024-11-20 07:27:53.240540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.897 [2024-11-20 07:27:53.240566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.897 qpair failed and we were unable to recover it. 
00:25:49.897 [2024-11-20 07:27:53.240676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.897 [2024-11-20 07:27:53.240702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.897 qpair failed and we were unable to recover it. 00:25:49.897 [2024-11-20 07:27:53.240798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.897 [2024-11-20 07:27:53.240824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.897 qpair failed and we were unable to recover it. 00:25:49.897 [2024-11-20 07:27:53.240908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.897 [2024-11-20 07:27:53.240936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.897 qpair failed and we were unable to recover it. 00:25:49.897 [2024-11-20 07:27:53.241023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.897 [2024-11-20 07:27:53.241049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.897 qpair failed and we were unable to recover it. 00:25:49.897 [2024-11-20 07:27:53.241139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.897 [2024-11-20 07:27:53.241165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.897 qpair failed and we were unable to recover it. 00:25:49.897 [2024-11-20 07:27:53.241278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.897 [2024-11-20 07:27:53.241312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.897 qpair failed and we were unable to recover it. 00:25:49.897 [2024-11-20 07:27:53.241437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.897 [2024-11-20 07:27:53.241464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.897 qpair failed and we were unable to recover it. 00:25:49.897 [2024-11-20 07:27:53.241600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.897 [2024-11-20 07:27:53.241627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.897 qpair failed and we were unable to recover it. 00:25:49.897 [2024-11-20 07:27:53.241751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.897 [2024-11-20 07:27:53.241779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.897 qpair failed and we were unable to recover it. 00:25:49.897 [2024-11-20 07:27:53.241869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.897 [2024-11-20 07:27:53.241896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.897 qpair failed and we were unable to recover it. 
00:25:49.897 [2024-11-20 07:27:53.242006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.897 [2024-11-20 07:27:53.242032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.897 qpair failed and we were unable to recover it. 00:25:49.897 [2024-11-20 07:27:53.242117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.897 [2024-11-20 07:27:53.242148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.897 qpair failed and we were unable to recover it. 00:25:49.897 [2024-11-20 07:27:53.242253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.897 [2024-11-20 07:27:53.242294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:49.897 qpair failed and we were unable to recover it. 00:25:49.897 [2024-11-20 07:27:53.242435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.897 [2024-11-20 07:27:53.242474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.897 qpair failed and we were unable to recover it. 00:25:49.897 [2024-11-20 07:27:53.242591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.897 [2024-11-20 07:27:53.242625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.897 qpair failed and we were unable to recover it. 00:25:49.897 [2024-11-20 07:27:53.242716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.897 [2024-11-20 07:27:53.242743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.897 qpair failed and we were unable to recover it. 00:25:49.897 [2024-11-20 07:27:53.242834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.897 [2024-11-20 07:27:53.242860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.897 qpair failed and we were unable to recover it. 00:25:49.897 [2024-11-20 07:27:53.242984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.897 [2024-11-20 07:27:53.243012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.897 qpair failed and we were unable to recover it. 00:25:49.897 [2024-11-20 07:27:53.243102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.897 [2024-11-20 07:27:53.243128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.897 qpair failed and we were unable to recover it. 00:25:49.897 [2024-11-20 07:27:53.243233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.897 [2024-11-20 07:27:53.243260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.897 qpair failed and we were unable to recover it. 
00:25:49.897 [2024-11-20 07:27:53.243358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-20 07:27:53.243385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-20 07:27:53.243497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-20 07:27:53.243523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-20 07:27:53.243606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-20 07:27:53.243631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-20 07:27:53.243748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-20 07:27:53.243776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-20 07:27:53.243862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-20 07:27:53.243890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-20 07:27:53.244016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-20 07:27:53.244044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-20 07:27:53.244139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-20 07:27:53.244165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-20 07:27:53.244249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-20 07:27:53.244275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-20 07:27:53.244368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-20 07:27:53.244395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-20 07:27:53.244490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-20 07:27:53.244517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 
00:25:49.898 [2024-11-20 07:27:53.244604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-20 07:27:53.244629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-20 07:27:53.244745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-20 07:27:53.244771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-20 07:27:53.244864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-20 07:27:53.244891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-20 07:27:53.244984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-20 07:27:53.245014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-20 07:27:53.245124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-20 07:27:53.245152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-20 07:27:53.245235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-20 07:27:53.245261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-20 07:27:53.245358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-20 07:27:53.245386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-20 07:27:53.245497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-20 07:27:53.245524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-20 07:27:53.245635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-20 07:27:53.245666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-20 07:27:53.245783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-20 07:27:53.245811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 
00:25:49.898 [2024-11-20 07:27:53.245928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-20 07:27:53.245954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-20 07:27:53.246038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-20 07:27:53.246064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-20 07:27:53.246172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-20 07:27:53.246197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-20 07:27:53.246315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-20 07:27:53.246341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:50.169 [2024-11-20 07:27:53.246452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.169 [2024-11-20 07:27:53.246478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.169 qpair failed and we were unable to recover it. 00:25:50.169 [2024-11-20 07:27:53.246567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.169 [2024-11-20 07:27:53.246593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.169 qpair failed and we were unable to recover it. 00:25:50.169 [2024-11-20 07:27:53.246703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.169 [2024-11-20 07:27:53.246728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.169 qpair failed and we were unable to recover it. 00:25:50.169 [2024-11-20 07:27:53.246814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.169 [2024-11-20 07:27:53.246842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.169 qpair failed and we were unable to recover it. 00:25:50.169 [2024-11-20 07:27:53.246930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.169 [2024-11-20 07:27:53.246958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.169 qpair failed and we were unable to recover it. 00:25:50.169 [2024-11-20 07:27:53.247053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.169 [2024-11-20 07:27:53.247085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.169 qpair failed and we were unable to recover it. 
00:25:50.169 [2024-11-20 07:27:53.247198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.169 [2024-11-20 07:27:53.247226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.169 qpair failed and we were unable to recover it. 00:25:50.169 [2024-11-20 07:27:53.247327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.169 [2024-11-20 07:27:53.247355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.169 qpair failed and we were unable to recover it. 00:25:50.169 [2024-11-20 07:27:53.247452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.169 [2024-11-20 07:27:53.247479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.169 qpair failed and we were unable to recover it. 00:25:50.169 [2024-11-20 07:27:53.247592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.169 [2024-11-20 07:27:53.247620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.169 qpair failed and we were unable to recover it. 00:25:50.169 [2024-11-20 07:27:53.247714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.169 [2024-11-20 07:27:53.247743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.169 qpair failed and we were unable to recover it. 00:25:50.169 [2024-11-20 07:27:53.247823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.169 [2024-11-20 07:27:53.247850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.169 qpair failed and we were unable to recover it. 00:25:50.169 [2024-11-20 07:27:53.247958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.169 [2024-11-20 07:27:53.247985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.169 qpair failed and we were unable to recover it. 00:25:50.169 [2024-11-20 07:27:53.248088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.169 [2024-11-20 07:27:53.248127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.169 qpair failed and we were unable to recover it. 00:25:50.169 [2024-11-20 07:27:53.248222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.169 [2024-11-20 07:27:53.248250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.169 qpair failed and we were unable to recover it. 00:25:50.169 [2024-11-20 07:27:53.248389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.169 [2024-11-20 07:27:53.248416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.169 qpair failed and we were unable to recover it. 
00:25:50.169 [2024-11-20 07:27:53.248527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.169 [2024-11-20 07:27:53.248554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.169 qpair failed and we were unable to recover it. 00:25:50.169 [2024-11-20 07:27:53.248672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.169 [2024-11-20 07:27:53.248698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.169 qpair failed and we were unable to recover it. 00:25:50.169 [2024-11-20 07:27:53.248792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.169 [2024-11-20 07:27:53.248819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.169 qpair failed and we were unable to recover it. 00:25:50.169 [2024-11-20 07:27:53.248920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.169 [2024-11-20 07:27:53.248948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.169 qpair failed and we were unable to recover it. 00:25:50.169 [2024-11-20 07:27:53.249091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.169 [2024-11-20 07:27:53.249116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.169 qpair failed and we were unable to recover it. 00:25:50.169 [2024-11-20 07:27:53.249231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.169 [2024-11-20 07:27:53.249260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.169 qpair failed and we were unable to recover it. 00:25:50.169 [2024-11-20 07:27:53.249356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.169 [2024-11-20 07:27:53.249384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.169 qpair failed and we were unable to recover it. 00:25:50.169 [2024-11-20 07:27:53.249482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.169 [2024-11-20 07:27:53.249522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.169 qpair failed and we were unable to recover it. 00:25:50.169 [2024-11-20 07:27:53.249615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.169 [2024-11-20 07:27:53.249642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.169 qpair failed and we were unable to recover it. 00:25:50.169 [2024-11-20 07:27:53.249738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.169 [2024-11-20 07:27:53.249766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.169 qpair failed and we were unable to recover it. 
00:25:50.169 [2024-11-20 07:27:53.249849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.169 [2024-11-20 07:27:53.249887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.169 qpair failed and we were unable to recover it. 00:25:50.169 [2024-11-20 07:27:53.250004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.169 [2024-11-20 07:27:53.250032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.169 qpair failed and we were unable to recover it. 00:25:50.169 [2024-11-20 07:27:53.250126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.169 [2024-11-20 07:27:53.250153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.169 qpair failed and we were unable to recover it. 00:25:50.170 [2024-11-20 07:27:53.250265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.170 [2024-11-20 07:27:53.250292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.170 qpair failed and we were unable to recover it. 00:25:50.170 [2024-11-20 07:27:53.250393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.170 [2024-11-20 07:27:53.250419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.170 qpair failed and we were unable to recover it. 00:25:50.170 [2024-11-20 07:27:53.250506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.170 [2024-11-20 07:27:53.250532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.170 qpair failed and we were unable to recover it. 00:25:50.170 [2024-11-20 07:27:53.250632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.170 [2024-11-20 07:27:53.250659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.170 qpair failed and we were unable to recover it. 00:25:50.170 [2024-11-20 07:27:53.250771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.170 [2024-11-20 07:27:53.250801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.170 qpair failed and we were unable to recover it. 00:25:50.170 [2024-11-20 07:27:53.250888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.170 [2024-11-20 07:27:53.250920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.170 qpair failed and we were unable to recover it. 00:25:50.170 [2024-11-20 07:27:53.251035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.170 [2024-11-20 07:27:53.251061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.170 qpair failed and we were unable to recover it. 
00:25:50.170 [2024-11-20 07:27:53.251148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.170 [2024-11-20 07:27:53.251192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.170 qpair failed and we were unable to recover it. 00:25:50.170 [2024-11-20 07:27:53.251313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.170 [2024-11-20 07:27:53.251341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.170 qpair failed and we were unable to recover it. 00:25:50.170 [2024-11-20 07:27:53.251438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.170 [2024-11-20 07:27:53.251465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.170 qpair failed and we were unable to recover it. 00:25:50.170 [2024-11-20 07:27:53.251549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.170 [2024-11-20 07:27:53.251579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.170 qpair failed and we were unable to recover it. 00:25:50.170 [2024-11-20 07:27:53.251665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.170 [2024-11-20 07:27:53.251692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.170 qpair failed and we were unable to recover it. 00:25:50.170 [2024-11-20 07:27:53.251775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.170 [2024-11-20 07:27:53.251804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.170 qpair failed and we were unable to recover it. 00:25:50.170 [2024-11-20 07:27:53.251897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.170 [2024-11-20 07:27:53.251924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.170 qpair failed and we were unable to recover it. 00:25:50.170 [2024-11-20 07:27:53.252024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.170 [2024-11-20 07:27:53.252064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.170 qpair failed and we were unable to recover it. 00:25:50.170 [2024-11-20 07:27:53.252150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.170 [2024-11-20 07:27:53.252177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.170 qpair failed and we were unable to recover it. 00:25:50.170 [2024-11-20 07:27:53.252263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.170 [2024-11-20 07:27:53.252289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.170 qpair failed and we were unable to recover it. 
00:25:50.170 [2024-11-20 07:27:53.252388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.170 [2024-11-20 07:27:53.252414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420
00:25:50.170 qpair failed and we were unable to recover it.
00:25:50.170 [2024-11-20 07:27:53.252498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.170 [2024-11-20 07:27:53.252526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420
00:25:50.170 qpair failed and we were unable to recover it.
00:25:50.170 [2024-11-20 07:27:53.252640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.170 [2024-11-20 07:27:53.252673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420
00:25:50.170 qpair failed and we were unable to recover it.
00:25:50.170 [2024-11-20 07:27:53.252770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.170 [2024-11-20 07:27:53.252798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420
00:25:50.170 qpair failed and we were unable to recover it.
00:25:50.170 [2024-11-20 07:27:53.252873] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:50.170 [2024-11-20 07:27:53.252907] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:50.170 [2024-11-20 07:27:53.252922] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:50.170 [2024-11-20 07:27:53.252935] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:50.170 [2024-11-20 07:27:53.252945] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:50.170 [2024-11-20 07:27:53.252885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.170 [2024-11-20 07:27:53.252912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420
00:25:50.170 qpair failed and we were unable to recover it.
00:25:50.170 [2024-11-20 07:27:53.253007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.170 [2024-11-20 07:27:53.253032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420
00:25:50.170 qpair failed and we were unable to recover it.
00:25:50.170 [2024-11-20 07:27:53.253110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.170 [2024-11-20 07:27:53.253135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420
00:25:50.170 qpair failed and we were unable to recover it.
00:25:50.170 [2024-11-20 07:27:53.253224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.170 [2024-11-20 07:27:53.253252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420
00:25:50.170 qpair failed and we were unable to recover it.
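Taken together, the app_setup_trace notices above are a small how-to for grabbing a trace snapshot from the running nvmf target. A minimal sketch of that, assuming the target is still running and is using shared-memory instance id 0 as the notices state (these commands are illustrative and are not executed by this job):

    spdk_trace -s nvmf -i 0        # capture a snapshot of events at runtime
    # plain 'spdk_trace' also works if this is the only SPDK application running
    cp /dev/shm/nvmf_trace.0 .     # or keep the shm file for offline analysis/debug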
00:25:50.170 [2024-11-20 07:27:53.253341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.170 [2024-11-20 07:27:53.253369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.170 qpair failed and we were unable to recover it. 00:25:50.170 [2024-11-20 07:27:53.253471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.170 [2024-11-20 07:27:53.253498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.170 qpair failed and we were unable to recover it. 00:25:50.170 [2024-11-20 07:27:53.253580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.170 [2024-11-20 07:27:53.253616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.170 qpair failed and we were unable to recover it. 00:25:50.170 [2024-11-20 07:27:53.253698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.170 [2024-11-20 07:27:53.253725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.170 qpair failed and we were unable to recover it. 00:25:50.170 [2024-11-20 07:27:53.253811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.170 [2024-11-20 07:27:53.253838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.170 qpair failed and we were unable to recover it. 00:25:50.170 [2024-11-20 07:27:53.253948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.170 [2024-11-20 07:27:53.253980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.170 qpair failed and we were unable to recover it. 00:25:50.170 [2024-11-20 07:27:53.254102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.170 [2024-11-20 07:27:53.254129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.170 qpair failed and we were unable to recover it. 00:25:50.170 [2024-11-20 07:27:53.254227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.170 [2024-11-20 07:27:53.254257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.170 qpair failed and we were unable to recover it. 00:25:50.170 [2024-11-20 07:27:53.254353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.170 [2024-11-20 07:27:53.254382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.170 qpair failed and we were unable to recover it. 00:25:50.170 [2024-11-20 07:27:53.254479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.170 [2024-11-20 07:27:53.254506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.170 qpair failed and we were unable to recover it. 
00:25:50.171 [2024-11-20 07:27:53.254535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:25:50.171 [2024-11-20 07:27:53.254561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:25:50.171 [2024-11-20 07:27:53.254610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:25:50.171 [2024-11-20 07:27:53.254613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:25:50.170 [2024-11-20 07:27:53.254609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.171 [2024-11-20 07:27:53.254640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420
00:25:50.171 qpair failed and we were unable to recover it.
00:25:50.171 [2024-11-20 07:27:53.254744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.171 [2024-11-20 07:27:53.254771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420
00:25:50.171 qpair failed and we were unable to recover it.
00:25:50.171 [2024-11-20 07:27:53.254857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.171 [2024-11-20 07:27:53.254882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420
00:25:50.171 qpair failed and we were unable to recover it.
00:25:50.171 [2024-11-20 07:27:53.254996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.171 [2024-11-20 07:27:53.255023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420
00:25:50.171 qpair failed and we were unable to recover it.
00:25:50.171 [2024-11-20 07:27:53.255115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.171 [2024-11-20 07:27:53.255142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420
00:25:50.171 qpair failed and we were unable to recover it.
00:25:50.171 [2024-11-20 07:27:53.255224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.171 [2024-11-20 07:27:53.255250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420
00:25:50.171 qpair failed and we were unable to recover it.
00:25:50.171 [2024-11-20 07:27:53.255347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.171 [2024-11-20 07:27:53.255375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420
00:25:50.171 qpair failed and we were unable to recover it.
00:25:50.171 [2024-11-20 07:27:53.255464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.171 [2024-11-20 07:27:53.255496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420
00:25:50.171 qpair failed and we were unable to recover it.
00:25:50.171 [2024-11-20 07:27:53.255610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.171 [2024-11-20 07:27:53.255640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420
00:25:50.171 qpair failed and we were unable to recover it.
00:25:50.171 [2024-11-20 07:27:53.255734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-11-20 07:27:53.255762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-11-20 07:27:53.255853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-11-20 07:27:53.255880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-11-20 07:27:53.255967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-11-20 07:27:53.255994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-11-20 07:27:53.256082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-11-20 07:27:53.256109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-11-20 07:27:53.256195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-11-20 07:27:53.256222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-11-20 07:27:53.256299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-11-20 07:27:53.256332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-11-20 07:27:53.256427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-11-20 07:27:53.256456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-11-20 07:27:53.256581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-11-20 07:27:53.256613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-11-20 07:27:53.256711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-11-20 07:27:53.256739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-11-20 07:27:53.256824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-11-20 07:27:53.256856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 
00:25:50.171 [2024-11-20 07:27:53.256948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-11-20 07:27:53.256975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-11-20 07:27:53.257074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-11-20 07:27:53.257102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-11-20 07:27:53.257227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-11-20 07:27:53.257254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-11-20 07:27:53.257356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-11-20 07:27:53.257388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-11-20 07:27:53.257517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-11-20 07:27:53.257545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-11-20 07:27:53.257692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-11-20 07:27:53.257732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-11-20 07:27:53.257824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-11-20 07:27:53.257850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-11-20 07:27:53.257976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-11-20 07:27:53.258013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-11-20 07:27:53.258105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-11-20 07:27:53.258133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-11-20 07:27:53.258221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-11-20 07:27:53.258258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 
00:25:50.171 [2024-11-20 07:27:53.258359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-11-20 07:27:53.258386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-11-20 07:27:53.258480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-11-20 07:27:53.258508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-11-20 07:27:53.258631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-11-20 07:27:53.258658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-11-20 07:27:53.258744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-11-20 07:27:53.258771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-11-20 07:27:53.258868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-11-20 07:27:53.258896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-11-20 07:27:53.258984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-11-20 07:27:53.259012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-11-20 07:27:53.259128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-11-20 07:27:53.259154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-11-20 07:27:53.259245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-11-20 07:27:53.259274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.172 [2024-11-20 07:27:53.259381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.172 [2024-11-20 07:27:53.259410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.172 qpair failed and we were unable to recover it. 00:25:50.172 [2024-11-20 07:27:53.259488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.172 [2024-11-20 07:27:53.259515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.172 qpair failed and we were unable to recover it. 
00:25:50.172 [2024-11-20 07:27:53.259603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.172 [2024-11-20 07:27:53.259631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.172 qpair failed and we were unable to recover it. 00:25:50.172 [2024-11-20 07:27:53.259716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.172 [2024-11-20 07:27:53.259743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.172 qpair failed and we were unable to recover it. 00:25:50.172 [2024-11-20 07:27:53.259832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.172 [2024-11-20 07:27:53.259859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.172 qpair failed and we were unable to recover it. 00:25:50.172 [2024-11-20 07:27:53.259949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.172 [2024-11-20 07:27:53.259977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.172 qpair failed and we were unable to recover it. 00:25:50.172 [2024-11-20 07:27:53.260073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.172 [2024-11-20 07:27:53.260106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.172 qpair failed and we were unable to recover it. 00:25:50.172 [2024-11-20 07:27:53.260192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.172 [2024-11-20 07:27:53.260219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.172 qpair failed and we were unable to recover it. 00:25:50.172 [2024-11-20 07:27:53.260327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.172 [2024-11-20 07:27:53.260356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.172 qpair failed and we were unable to recover it. 00:25:50.172 [2024-11-20 07:27:53.260446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.172 [2024-11-20 07:27:53.260473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.172 qpair failed and we were unable to recover it. 00:25:50.172 [2024-11-20 07:27:53.260587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.172 [2024-11-20 07:27:53.260626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.172 qpair failed and we were unable to recover it. 00:25:50.172 [2024-11-20 07:27:53.260704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.172 [2024-11-20 07:27:53.260731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.172 qpair failed and we were unable to recover it. 
00:25:50.172 [2024-11-20 07:27:53.260823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.172 [2024-11-20 07:27:53.260850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.172 qpair failed and we were unable to recover it. 00:25:50.172 [2024-11-20 07:27:53.260948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.172 [2024-11-20 07:27:53.260976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.172 qpair failed and we were unable to recover it. 00:25:50.172 [2024-11-20 07:27:53.261057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.172 [2024-11-20 07:27:53.261093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.172 qpair failed and we were unable to recover it. 00:25:50.172 [2024-11-20 07:27:53.261196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.172 [2024-11-20 07:27:53.261223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.172 qpair failed and we were unable to recover it. 00:25:50.172 [2024-11-20 07:27:53.261314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.172 [2024-11-20 07:27:53.261341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.172 qpair failed and we were unable to recover it. 00:25:50.172 [2024-11-20 07:27:53.261429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.172 [2024-11-20 07:27:53.261457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.172 qpair failed and we were unable to recover it. 00:25:50.172 [2024-11-20 07:27:53.261547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.172 [2024-11-20 07:27:53.261573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.172 qpair failed and we were unable to recover it. 00:25:50.172 [2024-11-20 07:27:53.261663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.172 [2024-11-20 07:27:53.261690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.172 qpair failed and we were unable to recover it. 00:25:50.172 [2024-11-20 07:27:53.261778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.172 [2024-11-20 07:27:53.261807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.172 qpair failed and we were unable to recover it. 00:25:50.172 [2024-11-20 07:27:53.261899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.172 [2024-11-20 07:27:53.261926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.172 qpair failed and we were unable to recover it. 
00:25:50.172 [2024-11-20 07:27:53.262008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.172 [2024-11-20 07:27:53.262049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.172 qpair failed and we were unable to recover it. 00:25:50.172 [2024-11-20 07:27:53.262139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.172 [2024-11-20 07:27:53.262167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.172 qpair failed and we were unable to recover it. 00:25:50.172 [2024-11-20 07:27:53.262289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.172 [2024-11-20 07:27:53.262324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.172 qpair failed and we were unable to recover it. 00:25:50.172 [2024-11-20 07:27:53.262450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.172 [2024-11-20 07:27:53.262478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.172 qpair failed and we were unable to recover it. 00:25:50.172 [2024-11-20 07:27:53.262576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.172 [2024-11-20 07:27:53.262603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.172 qpair failed and we were unable to recover it. 00:25:50.172 [2024-11-20 07:27:53.262694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.172 [2024-11-20 07:27:53.262731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.172 qpair failed and we were unable to recover it. 00:25:50.172 [2024-11-20 07:27:53.262823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.172 [2024-11-20 07:27:53.262854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.172 qpair failed and we were unable to recover it. 00:25:50.172 [2024-11-20 07:27:53.262942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.172 [2024-11-20 07:27:53.262969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.172 qpair failed and we were unable to recover it. 00:25:50.172 [2024-11-20 07:27:53.263067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.172 [2024-11-20 07:27:53.263097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.172 qpair failed and we were unable to recover it. 00:25:50.172 [2024-11-20 07:27:53.263182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.172 [2024-11-20 07:27:53.263210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.172 qpair failed and we were unable to recover it. 
00:25:50.172 [2024-11-20 07:27:53.263307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.172 [2024-11-20 07:27:53.263336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.172 qpair failed and we were unable to recover it. 00:25:50.172 [2024-11-20 07:27:53.263431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.172 [2024-11-20 07:27:53.263458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.172 qpair failed and we were unable to recover it. 00:25:50.172 [2024-11-20 07:27:53.263547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.172 [2024-11-20 07:27:53.263575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.172 qpair failed and we were unable to recover it. 00:25:50.172 [2024-11-20 07:27:53.263691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.172 [2024-11-20 07:27:53.263718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.172 qpair failed and we were unable to recover it. 00:25:50.172 [2024-11-20 07:27:53.263799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.172 [2024-11-20 07:27:53.263825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.172 qpair failed and we were unable to recover it. 00:25:50.172 [2024-11-20 07:27:53.263922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.172 [2024-11-20 07:27:53.263948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.173 qpair failed and we were unable to recover it. 00:25:50.173 [2024-11-20 07:27:53.264026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.173 [2024-11-20 07:27:53.264052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.173 qpair failed and we were unable to recover it. 00:25:50.173 [2024-11-20 07:27:53.264139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.173 [2024-11-20 07:27:53.264166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.173 qpair failed and we were unable to recover it. 00:25:50.173 [2024-11-20 07:27:53.264297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.173 [2024-11-20 07:27:53.264345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.173 qpair failed and we were unable to recover it. 00:25:50.173 [2024-11-20 07:27:53.264451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.173 [2024-11-20 07:27:53.264479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.173 qpair failed and we were unable to recover it. 
00:25:50.173 [2024-11-20 07:27:53.264605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.173 [2024-11-20 07:27:53.264633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.173 qpair failed and we were unable to recover it. 00:25:50.173 [2024-11-20 07:27:53.264733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.173 [2024-11-20 07:27:53.264764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.173 qpair failed and we were unable to recover it. 00:25:50.173 [2024-11-20 07:27:53.264847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.173 [2024-11-20 07:27:53.264874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.173 qpair failed and we were unable to recover it. 00:25:50.173 [2024-11-20 07:27:53.264964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.173 [2024-11-20 07:27:53.264993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.173 qpair failed and we were unable to recover it. 00:25:50.173 [2024-11-20 07:27:53.265086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.173 [2024-11-20 07:27:53.265115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.173 qpair failed and we were unable to recover it. 00:25:50.173 [2024-11-20 07:27:53.265206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.173 [2024-11-20 07:27:53.265233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.173 qpair failed and we were unable to recover it. 00:25:50.173 [2024-11-20 07:27:53.265342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.173 [2024-11-20 07:27:53.265369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.173 qpair failed and we were unable to recover it. 00:25:50.173 [2024-11-20 07:27:53.265458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.173 [2024-11-20 07:27:53.265485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.173 qpair failed and we were unable to recover it. 00:25:50.173 [2024-11-20 07:27:53.265569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.173 [2024-11-20 07:27:53.265600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.173 qpair failed and we were unable to recover it. 00:25:50.173 [2024-11-20 07:27:53.265683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.173 [2024-11-20 07:27:53.265711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.173 qpair failed and we were unable to recover it. 
00:25:50.173 [2024-11-20 07:27:53.265834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.173 [2024-11-20 07:27:53.265863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.173 qpair failed and we were unable to recover it. 00:25:50.173 [2024-11-20 07:27:53.265977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.173 [2024-11-20 07:27:53.266023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.173 qpair failed and we were unable to recover it. 00:25:50.173 [2024-11-20 07:27:53.266118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.173 [2024-11-20 07:27:53.266147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.173 qpair failed and we were unable to recover it. 00:25:50.173 [2024-11-20 07:27:53.266233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.173 [2024-11-20 07:27:53.266260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.173 qpair failed and we were unable to recover it. 00:25:50.173 [2024-11-20 07:27:53.266349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.173 [2024-11-20 07:27:53.266377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.173 qpair failed and we were unable to recover it. 00:25:50.173 [2024-11-20 07:27:53.266490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.173 [2024-11-20 07:27:53.266518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.173 qpair failed and we were unable to recover it. 00:25:50.173 [2024-11-20 07:27:53.266646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.173 [2024-11-20 07:27:53.266673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.173 qpair failed and we were unable to recover it. 00:25:50.173 [2024-11-20 07:27:53.266767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.173 [2024-11-20 07:27:53.266803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.173 qpair failed and we were unable to recover it. 00:25:50.173 [2024-11-20 07:27:53.266895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.173 [2024-11-20 07:27:53.266923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.173 qpair failed and we were unable to recover it. 00:25:50.173 [2024-11-20 07:27:53.267010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.173 [2024-11-20 07:27:53.267039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.173 qpair failed and we were unable to recover it. 
00:25:50.173 [2024-11-20 07:27:53.267130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.173 [2024-11-20 07:27:53.267157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.173 qpair failed and we were unable to recover it. 00:25:50.173 [2024-11-20 07:27:53.267245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.173 [2024-11-20 07:27:53.267272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.173 qpair failed and we were unable to recover it. 00:25:50.173 [2024-11-20 07:27:53.267380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.173 [2024-11-20 07:27:53.267407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.173 qpair failed and we were unable to recover it. 00:25:50.173 [2024-11-20 07:27:53.267502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.173 [2024-11-20 07:27:53.267529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.173 qpair failed and we were unable to recover it. 00:25:50.173 [2024-11-20 07:27:53.267616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.173 [2024-11-20 07:27:53.267643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.173 qpair failed and we were unable to recover it. 00:25:50.173 [2024-11-20 07:27:53.267724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.173 [2024-11-20 07:27:53.267750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.173 qpair failed and we were unable to recover it. 00:25:50.173 [2024-11-20 07:27:53.267839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.173 [2024-11-20 07:27:53.267865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.173 qpair failed and we were unable to recover it. 00:25:50.174 [2024-11-20 07:27:53.267954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.174 [2024-11-20 07:27:53.267983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.174 qpair failed and we were unable to recover it. 00:25:50.174 [2024-11-20 07:27:53.268094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.174 [2024-11-20 07:27:53.268124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.174 qpair failed and we were unable to recover it. 00:25:50.174 [2024-11-20 07:27:53.268256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.174 [2024-11-20 07:27:53.268295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.174 qpair failed and we were unable to recover it. 
00:25:50.174 [2024-11-20 07:27:53.268398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.174 [2024-11-20 07:27:53.268425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.174 qpair failed and we were unable to recover it. 00:25:50.174 [2024-11-20 07:27:53.268516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.174 [2024-11-20 07:27:53.268542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.174 qpair failed and we were unable to recover it. 00:25:50.174 [2024-11-20 07:27:53.268633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.174 [2024-11-20 07:27:53.268659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.174 qpair failed and we were unable to recover it. 00:25:50.174 [2024-11-20 07:27:53.268741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.174 [2024-11-20 07:27:53.268767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.174 qpair failed and we were unable to recover it. 00:25:50.174 [2024-11-20 07:27:53.268845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.174 [2024-11-20 07:27:53.268871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.174 qpair failed and we were unable to recover it. 00:25:50.174 [2024-11-20 07:27:53.268963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.174 [2024-11-20 07:27:53.269006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.174 qpair failed and we were unable to recover it. 00:25:50.174 [2024-11-20 07:27:53.269086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.174 [2024-11-20 07:27:53.269114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.174 qpair failed and we were unable to recover it. 00:25:50.174 [2024-11-20 07:27:53.269197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.174 [2024-11-20 07:27:53.269224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.174 qpair failed and we were unable to recover it. 00:25:50.174 [2024-11-20 07:27:53.269312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.174 [2024-11-20 07:27:53.269344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.174 qpair failed and we were unable to recover it. 00:25:50.174 [2024-11-20 07:27:53.269434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.174 [2024-11-20 07:27:53.269470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.174 qpair failed and we were unable to recover it. 
00:25:50.174 [2024-11-20 07:27:53.269561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.174 [2024-11-20 07:27:53.269589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.174 qpair failed and we were unable to recover it. 00:25:50.174 [2024-11-20 07:27:53.269689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.174 [2024-11-20 07:27:53.269724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.174 qpair failed and we were unable to recover it. 00:25:50.174 [2024-11-20 07:27:53.269810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.174 [2024-11-20 07:27:53.269839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.174 qpair failed and we were unable to recover it. 00:25:50.174 [2024-11-20 07:27:53.269933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.174 [2024-11-20 07:27:53.269960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.174 qpair failed and we were unable to recover it. 00:25:50.174 [2024-11-20 07:27:53.270053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.174 [2024-11-20 07:27:53.270081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.174 qpair failed and we were unable to recover it. 00:25:50.174 [2024-11-20 07:27:53.270174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.174 [2024-11-20 07:27:53.270201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.174 qpair failed and we were unable to recover it. 00:25:50.174 [2024-11-20 07:27:53.270320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.174 [2024-11-20 07:27:53.270350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.174 qpair failed and we were unable to recover it. 00:25:50.174 [2024-11-20 07:27:53.270434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.174 [2024-11-20 07:27:53.270477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.174 qpair failed and we were unable to recover it. 00:25:50.174 [2024-11-20 07:27:53.270572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.174 [2024-11-20 07:27:53.270601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.174 qpair failed and we were unable to recover it. 00:25:50.174 [2024-11-20 07:27:53.270700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.174 [2024-11-20 07:27:53.270736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.174 qpair failed and we were unable to recover it. 
00:25:50.174 [2024-11-20 07:27:53.270828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.174 [2024-11-20 07:27:53.270855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.174 qpair failed and we were unable to recover it. 00:25:50.174 [2024-11-20 07:27:53.270936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.174 [2024-11-20 07:27:53.270962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.174 qpair failed and we were unable to recover it. 00:25:50.174 [2024-11-20 07:27:53.271064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.174 [2024-11-20 07:27:53.271104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.174 qpair failed and we were unable to recover it. 00:25:50.174 [2024-11-20 07:27:53.271230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.174 [2024-11-20 07:27:53.271258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.174 qpair failed and we were unable to recover it. 00:25:50.174 [2024-11-20 07:27:53.271382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.174 [2024-11-20 07:27:53.271409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.174 qpair failed and we were unable to recover it. 00:25:50.174 [2024-11-20 07:27:53.271502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.174 [2024-11-20 07:27:53.271527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.174 qpair failed and we were unable to recover it. 00:25:50.174 [2024-11-20 07:27:53.271621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.174 [2024-11-20 07:27:53.271646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.174 qpair failed and we were unable to recover it. 00:25:50.174 [2024-11-20 07:27:53.271726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.174 [2024-11-20 07:27:53.271752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.174 qpair failed and we were unable to recover it. 00:25:50.174 [2024-11-20 07:27:53.271843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.174 [2024-11-20 07:27:53.271869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.174 qpair failed and we were unable to recover it. 00:25:50.174 [2024-11-20 07:27:53.271950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.174 [2024-11-20 07:27:53.271975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.174 qpair failed and we were unable to recover it. 
00:25:50.174 [2024-11-20 07:27:53.272061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.174 [2024-11-20 07:27:53.272087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.174 qpair failed and we were unable to recover it. 00:25:50.174 [2024-11-20 07:27:53.272178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.174 [2024-11-20 07:27:53.272206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.174 qpair failed and we were unable to recover it. 00:25:50.174 [2024-11-20 07:27:53.272298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.174 [2024-11-20 07:27:53.272341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.174 qpair failed and we were unable to recover it. 00:25:50.174 [2024-11-20 07:27:53.272440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.174 [2024-11-20 07:27:53.272468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.174 qpair failed and we were unable to recover it. 00:25:50.174 [2024-11-20 07:27:53.272569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.174 [2024-11-20 07:27:53.272596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.174 qpair failed and we were unable to recover it. 00:25:50.175 [2024-11-20 07:27:53.272693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.175 [2024-11-20 07:27:53.272719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.175 qpair failed and we were unable to recover it. 00:25:50.175 [2024-11-20 07:27:53.272803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.175 [2024-11-20 07:27:53.272829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.175 qpair failed and we were unable to recover it. 00:25:50.175 [2024-11-20 07:27:53.272938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.175 [2024-11-20 07:27:53.272963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.175 qpair failed and we were unable to recover it. 00:25:50.175 [2024-11-20 07:27:53.273061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.175 [2024-11-20 07:27:53.273102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.175 qpair failed and we were unable to recover it. 00:25:50.175 [2024-11-20 07:27:53.273203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.175 [2024-11-20 07:27:53.273239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.175 qpair failed and we were unable to recover it. 
00:25:50.175 [2024-11-20 07:27:53.273340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.175 [2024-11-20 07:27:53.273371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.175 qpair failed and we were unable to recover it. 00:25:50.175 [2024-11-20 07:27:53.273459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.175 [2024-11-20 07:27:53.273487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.175 qpair failed and we were unable to recover it. 00:25:50.175 [2024-11-20 07:27:53.273578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.175 [2024-11-20 07:27:53.273610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.175 qpair failed and we were unable to recover it. 00:25:50.175 [2024-11-20 07:27:53.273710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.175 [2024-11-20 07:27:53.273741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.175 qpair failed and we were unable to recover it. 00:25:50.175 [2024-11-20 07:27:53.273825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.175 [2024-11-20 07:27:53.273853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.175 qpair failed and we were unable to recover it. 00:25:50.175 [2024-11-20 07:27:53.273938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.175 [2024-11-20 07:27:53.273963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.175 qpair failed and we were unable to recover it. 00:25:50.175 [2024-11-20 07:27:53.274084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.175 [2024-11-20 07:27:53.274112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.175 qpair failed and we were unable to recover it. 00:25:50.175 [2024-11-20 07:27:53.274200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.175 [2024-11-20 07:27:53.274226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.175 qpair failed and we were unable to recover it. 00:25:50.175 [2024-11-20 07:27:53.274324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.175 [2024-11-20 07:27:53.274354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.175 qpair failed and we were unable to recover it. 00:25:50.175 [2024-11-20 07:27:53.274448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.175 [2024-11-20 07:27:53.274478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.175 qpair failed and we were unable to recover it. 
00:25:50.175 [2024-11-20 07:27:53.274568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.175 [2024-11-20 07:27:53.274596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.175 qpair failed and we were unable to recover it. 00:25:50.175 [2024-11-20 07:27:53.274680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.175 [2024-11-20 07:27:53.274707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.175 qpair failed and we were unable to recover it. 00:25:50.175 [2024-11-20 07:27:53.274794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.175 [2024-11-20 07:27:53.274834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.175 qpair failed and we were unable to recover it. 00:25:50.175 [2024-11-20 07:27:53.274932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.175 [2024-11-20 07:27:53.274960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.175 qpair failed and we were unable to recover it. 00:25:50.175 [2024-11-20 07:27:53.275080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.175 [2024-11-20 07:27:53.275108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.175 qpair failed and we were unable to recover it. 00:25:50.175 [2024-11-20 07:27:53.275202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.175 [2024-11-20 07:27:53.275232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.175 qpair failed and we were unable to recover it. 00:25:50.175 [2024-11-20 07:27:53.275327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.175 [2024-11-20 07:27:53.275354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.175 qpair failed and we were unable to recover it. 00:25:50.175 [2024-11-20 07:27:53.275436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.175 [2024-11-20 07:27:53.275462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.175 qpair failed and we were unable to recover it. 00:25:50.175 [2024-11-20 07:27:53.275551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.175 [2024-11-20 07:27:53.275578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.175 qpair failed and we were unable to recover it. 00:25:50.175 [2024-11-20 07:27:53.275703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.175 [2024-11-20 07:27:53.275732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.175 qpair failed and we were unable to recover it. 
00:25:50.175 [2024-11-20 07:27:53.275821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.175 [2024-11-20 07:27:53.275848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.175 qpair failed and we were unable to recover it. 00:25:50.175 [2024-11-20 07:27:53.275941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.175 [2024-11-20 07:27:53.275968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.175 qpair failed and we were unable to recover it. 00:25:50.175 [2024-11-20 07:27:53.276070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.175 [2024-11-20 07:27:53.276097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.175 qpair failed and we were unable to recover it. 00:25:50.175 [2024-11-20 07:27:53.276182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.175 [2024-11-20 07:27:53.276208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.175 qpair failed and we were unable to recover it. 00:25:50.175 [2024-11-20 07:27:53.276307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.175 [2024-11-20 07:27:53.276336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.175 qpair failed and we were unable to recover it. 00:25:50.175 [2024-11-20 07:27:53.276416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.175 [2024-11-20 07:27:53.276442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.175 qpair failed and we were unable to recover it. 00:25:50.175 [2024-11-20 07:27:53.276557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.175 [2024-11-20 07:27:53.276584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.175 qpair failed and we were unable to recover it. 00:25:50.175 [2024-11-20 07:27:53.276666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.175 [2024-11-20 07:27:53.276691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.175 qpair failed and we were unable to recover it. 00:25:50.175 [2024-11-20 07:27:53.276808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.175 [2024-11-20 07:27:53.276836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.175 qpair failed and we were unable to recover it. 00:25:50.175 [2024-11-20 07:27:53.276956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.175 [2024-11-20 07:27:53.276985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.175 qpair failed and we were unable to recover it. 
00:25:50.175 [2024-11-20 07:27:53.277073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.175 [2024-11-20 07:27:53.277101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.175 qpair failed and we were unable to recover it. 00:25:50.175 [2024-11-20 07:27:53.277186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.175 [2024-11-20 07:27:53.277214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.175 qpair failed and we were unable to recover it. 00:25:50.175 [2024-11-20 07:27:53.277333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.175 [2024-11-20 07:27:53.277373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.175 qpair failed and we were unable to recover it. 00:25:50.176 [2024-11-20 07:27:53.277460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.176 [2024-11-20 07:27:53.277487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.176 qpair failed and we were unable to recover it. 00:25:50.176 [2024-11-20 07:27:53.277573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.176 [2024-11-20 07:27:53.277600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.176 qpair failed and we were unable to recover it. 00:25:50.176 [2024-11-20 07:27:53.277713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.176 [2024-11-20 07:27:53.277740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.176 qpair failed and we were unable to recover it. 00:25:50.176 [2024-11-20 07:27:53.277853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.176 [2024-11-20 07:27:53.277880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.176 qpair failed and we were unable to recover it. 00:25:50.176 [2024-11-20 07:27:53.277978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.176 [2024-11-20 07:27:53.278005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.176 qpair failed and we were unable to recover it. 00:25:50.176 [2024-11-20 07:27:53.278091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.176 [2024-11-20 07:27:53.278121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.176 qpair failed and we were unable to recover it. 00:25:50.176 [2024-11-20 07:27:53.278214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.176 [2024-11-20 07:27:53.278242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.176 qpair failed and we were unable to recover it. 
00:25:50.176 [2024-11-20 07:27:53.278328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.176 [2024-11-20 07:27:53.278356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.176 qpair failed and we were unable to recover it. 00:25:50.176 [2024-11-20 07:27:53.278438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.176 [2024-11-20 07:27:53.278464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.176 qpair failed and we were unable to recover it. 00:25:50.176 [2024-11-20 07:27:53.278556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.176 [2024-11-20 07:27:53.278583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.176 qpair failed and we were unable to recover it. 00:25:50.176 [2024-11-20 07:27:53.278672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.176 [2024-11-20 07:27:53.278700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.176 qpair failed and we were unable to recover it. 00:25:50.176 [2024-11-20 07:27:53.278820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.176 [2024-11-20 07:27:53.278847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.176 qpair failed and we were unable to recover it. 00:25:50.176 [2024-11-20 07:27:53.278931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.176 [2024-11-20 07:27:53.278958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.176 qpair failed and we were unable to recover it. 00:25:50.176 [2024-11-20 07:27:53.279048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.176 [2024-11-20 07:27:53.279075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.176 qpair failed and we were unable to recover it. 00:25:50.176 [2024-11-20 07:27:53.279167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.176 [2024-11-20 07:27:53.279194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.176 qpair failed and we were unable to recover it. 00:25:50.176 [2024-11-20 07:27:53.279314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.176 [2024-11-20 07:27:53.279341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.176 qpair failed and we were unable to recover it. 00:25:50.176 [2024-11-20 07:27:53.279432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.176 [2024-11-20 07:27:53.279459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.176 qpair failed and we were unable to recover it. 
00:25:50.176 [2024-11-20 07:27:53.279541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.176 [2024-11-20 07:27:53.279569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.176 qpair failed and we were unable to recover it. 00:25:50.176 [2024-11-20 07:27:53.279657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.176 [2024-11-20 07:27:53.279683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.176 qpair failed and we were unable to recover it. 00:25:50.176 [2024-11-20 07:27:53.279772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.176 [2024-11-20 07:27:53.279801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.176 qpair failed and we were unable to recover it. 00:25:50.176 [2024-11-20 07:27:53.279888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.176 [2024-11-20 07:27:53.279915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.176 qpair failed and we were unable to recover it. 00:25:50.176 [2024-11-20 07:27:53.280010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.176 [2024-11-20 07:27:53.280050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.176 qpair failed and we were unable to recover it. 00:25:50.176 [2024-11-20 07:27:53.280148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.176 [2024-11-20 07:27:53.280176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.176 qpair failed and we were unable to recover it. 00:25:50.176 [2024-11-20 07:27:53.280272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.176 [2024-11-20 07:27:53.280299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.176 qpair failed and we were unable to recover it. 00:25:50.176 [2024-11-20 07:27:53.280392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.176 [2024-11-20 07:27:53.280419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.176 qpair failed and we were unable to recover it. 00:25:50.176 [2024-11-20 07:27:53.280502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.176 [2024-11-20 07:27:53.280530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.176 qpair failed and we were unable to recover it. 00:25:50.176 [2024-11-20 07:27:53.280637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.176 [2024-11-20 07:27:53.280677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.176 qpair failed and we were unable to recover it. 
00:25:50.176 [2024-11-20 07:27:53.280773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.176 [2024-11-20 07:27:53.280801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.176 qpair failed and we were unable to recover it. 00:25:50.176 [2024-11-20 07:27:53.280897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.176 [2024-11-20 07:27:53.280925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.176 qpair failed and we were unable to recover it. 00:25:50.176 [2024-11-20 07:27:53.281038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.176 [2024-11-20 07:27:53.281065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.176 qpair failed and we were unable to recover it. 00:25:50.176 [2024-11-20 07:27:53.281177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.176 [2024-11-20 07:27:53.281203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.176 qpair failed and we were unable to recover it. 00:25:50.176 [2024-11-20 07:27:53.281294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.176 [2024-11-20 07:27:53.281334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.176 qpair failed and we were unable to recover it. 00:25:50.176 [2024-11-20 07:27:53.281424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.176 [2024-11-20 07:27:53.281452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.176 qpair failed and we were unable to recover it. 00:25:50.176 [2024-11-20 07:27:53.281536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.176 [2024-11-20 07:27:53.281563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.176 qpair failed and we were unable to recover it. 00:25:50.176 [2024-11-20 07:27:53.281651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.176 [2024-11-20 07:27:53.281678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.176 qpair failed and we were unable to recover it. 00:25:50.176 [2024-11-20 07:27:53.281761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.176 [2024-11-20 07:27:53.281787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.176 qpair failed and we were unable to recover it. 00:25:50.176 [2024-11-20 07:27:53.281865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.176 [2024-11-20 07:27:53.281891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.176 qpair failed and we were unable to recover it. 
00:25:50.176 [2024-11-20 07:27:53.282002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.176 [2024-11-20 07:27:53.282042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.177 qpair failed and we were unable to recover it. 00:25:50.177 [2024-11-20 07:27:53.282131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-11-20 07:27:53.282158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.177 qpair failed and we were unable to recover it. 00:25:50.177 [2024-11-20 07:27:53.282294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-11-20 07:27:53.282349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.177 qpair failed and we were unable to recover it. 00:25:50.177 [2024-11-20 07:27:53.282448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-11-20 07:27:53.282476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.177 qpair failed and we were unable to recover it. 00:25:50.177 [2024-11-20 07:27:53.282558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-11-20 07:27:53.282584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.177 qpair failed and we were unable to recover it. 00:25:50.177 [2024-11-20 07:27:53.282673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-11-20 07:27:53.282700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.177 qpair failed and we were unable to recover it. 00:25:50.177 [2024-11-20 07:27:53.282792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-11-20 07:27:53.282820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.177 qpair failed and we were unable to recover it. 00:25:50.177 [2024-11-20 07:27:53.282910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-11-20 07:27:53.282938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.177 qpair failed and we were unable to recover it. 00:25:50.177 [2024-11-20 07:27:53.283032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-11-20 07:27:53.283061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.177 qpair failed and we were unable to recover it. 00:25:50.177 [2024-11-20 07:27:53.283140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-11-20 07:27:53.283166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.177 qpair failed and we were unable to recover it. 
00:25:50.177 [2024-11-20 07:27:53.283253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-11-20 07:27:53.283279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.177 qpair failed and we were unable to recover it. 00:25:50.177 [2024-11-20 07:27:53.283398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-11-20 07:27:53.283426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.177 qpair failed and we were unable to recover it. 00:25:50.177 [2024-11-20 07:27:53.283510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-11-20 07:27:53.283537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.177 qpair failed and we were unable to recover it. 00:25:50.177 [2024-11-20 07:27:53.283624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-11-20 07:27:53.283650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.177 qpair failed and we were unable to recover it. 00:25:50.177 [2024-11-20 07:27:53.283758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-11-20 07:27:53.283785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.177 qpair failed and we were unable to recover it. 00:25:50.177 [2024-11-20 07:27:53.283866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-11-20 07:27:53.283894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.177 qpair failed and we were unable to recover it. 00:25:50.177 [2024-11-20 07:27:53.283998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-11-20 07:27:53.284027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.177 qpair failed and we were unable to recover it. 00:25:50.177 [2024-11-20 07:27:53.284120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-11-20 07:27:53.284147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.177 qpair failed and we were unable to recover it. 00:25:50.177 [2024-11-20 07:27:53.284255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-11-20 07:27:53.284283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.177 qpair failed and we were unable to recover it. 00:25:50.177 [2024-11-20 07:27:53.284386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-11-20 07:27:53.284413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.177 qpair failed and we were unable to recover it. 
00:25:50.177 [2024-11-20 07:27:53.284498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-11-20 07:27:53.284525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.177 qpair failed and we were unable to recover it. 00:25:50.177 [2024-11-20 07:27:53.284608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-11-20 07:27:53.284635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.177 qpair failed and we were unable to recover it. 00:25:50.177 [2024-11-20 07:27:53.284721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-11-20 07:27:53.284749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.177 qpair failed and we were unable to recover it. 00:25:50.177 [2024-11-20 07:27:53.284830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-11-20 07:27:53.284858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.177 qpair failed and we were unable to recover it. 00:25:50.177 [2024-11-20 07:27:53.284970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-11-20 07:27:53.284996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.177 qpair failed and we were unable to recover it. 00:25:50.177 [2024-11-20 07:27:53.285083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-11-20 07:27:53.285113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.177 qpair failed and we were unable to recover it. 00:25:50.177 [2024-11-20 07:27:53.285203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-11-20 07:27:53.285230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.177 qpair failed and we were unable to recover it. 00:25:50.177 [2024-11-20 07:27:53.285338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-11-20 07:27:53.285378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.177 qpair failed and we were unable to recover it. 00:25:50.177 [2024-11-20 07:27:53.285474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-11-20 07:27:53.285503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.177 qpair failed and we were unable to recover it. 00:25:50.177 [2024-11-20 07:27:53.285585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-11-20 07:27:53.285617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.177 qpair failed and we were unable to recover it. 
00:25:50.177 [2024-11-20 07:27:53.285708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-11-20 07:27:53.285736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.177 qpair failed and we were unable to recover it. 00:25:50.177 [2024-11-20 07:27:53.285847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-11-20 07:27:53.285876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.177 qpair failed and we were unable to recover it. 00:25:50.177 [2024-11-20 07:27:53.285991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-11-20 07:27:53.286018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.177 qpair failed and we were unable to recover it. 00:25:50.177 [2024-11-20 07:27:53.286142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-11-20 07:27:53.286170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.177 qpair failed and we were unable to recover it. 00:25:50.177 [2024-11-20 07:27:53.286286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-11-20 07:27:53.286320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.177 qpair failed and we were unable to recover it. 00:25:50.177 [2024-11-20 07:27:53.286403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-11-20 07:27:53.286430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.177 qpair failed and we were unable to recover it. 00:25:50.177 [2024-11-20 07:27:53.286520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-11-20 07:27:53.286548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.177 qpair failed and we were unable to recover it. 00:25:50.177 [2024-11-20 07:27:53.286639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.177 [2024-11-20 07:27:53.286665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.177 qpair failed and we were unable to recover it. 00:25:50.177 [2024-11-20 07:27:53.286750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-11-20 07:27:53.286777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.178 qpair failed and we were unable to recover it. 00:25:50.178 [2024-11-20 07:27:53.286859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-11-20 07:27:53.286887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.178 qpair failed and we were unable to recover it. 
00:25:50.178 [2024-11-20 07:27:53.286969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-11-20 07:27:53.286995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.178 qpair failed and we were unable to recover it. 00:25:50.178 [2024-11-20 07:27:53.287099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-11-20 07:27:53.287138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.178 qpair failed and we were unable to recover it. 00:25:50.178 [2024-11-20 07:27:53.287262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-11-20 07:27:53.287290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.178 qpair failed and we were unable to recover it. 00:25:50.178 [2024-11-20 07:27:53.287399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-11-20 07:27:53.287439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.178 qpair failed and we were unable to recover it. 00:25:50.178 [2024-11-20 07:27:53.287565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-11-20 07:27:53.287593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.178 qpair failed and we were unable to recover it. 00:25:50.178 [2024-11-20 07:27:53.287710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-11-20 07:27:53.287736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.178 qpair failed and we were unable to recover it. 00:25:50.178 [2024-11-20 07:27:53.287825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-11-20 07:27:53.287851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.178 qpair failed and we were unable to recover it. 00:25:50.178 [2024-11-20 07:27:53.287938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-11-20 07:27:53.287965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.178 qpair failed and we were unable to recover it. 00:25:50.178 [2024-11-20 07:27:53.288091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-11-20 07:27:53.288131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.178 qpair failed and we were unable to recover it. 00:25:50.178 [2024-11-20 07:27:53.288247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-11-20 07:27:53.288286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.178 qpair failed and we were unable to recover it. 
00:25:50.178 [2024-11-20 07:27:53.288400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-11-20 07:27:53.288429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.178 qpair failed and we were unable to recover it. 00:25:50.178 [2024-11-20 07:27:53.288518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-11-20 07:27:53.288546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.178 qpair failed and we were unable to recover it. 00:25:50.178 [2024-11-20 07:27:53.288640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-11-20 07:27:53.288667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.178 qpair failed and we were unable to recover it. 00:25:50.178 [2024-11-20 07:27:53.288755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-11-20 07:27:53.288782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.178 qpair failed and we were unable to recover it. 00:25:50.178 [2024-11-20 07:27:53.288921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-11-20 07:27:53.288948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.178 qpair failed and we were unable to recover it. 00:25:50.178 [2024-11-20 07:27:53.289030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-11-20 07:27:53.289057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.178 qpair failed and we were unable to recover it. 00:25:50.178 [2024-11-20 07:27:53.289147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-11-20 07:27:53.289175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.178 qpair failed and we were unable to recover it. 00:25:50.178 [2024-11-20 07:27:53.289262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-11-20 07:27:53.289291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.178 qpair failed and we were unable to recover it. 00:25:50.178 [2024-11-20 07:27:53.289397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-11-20 07:27:53.289436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.178 qpair failed and we were unable to recover it. 00:25:50.178 [2024-11-20 07:27:53.289523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-11-20 07:27:53.289551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.178 qpair failed and we were unable to recover it. 
00:25:50.178 [2024-11-20 07:27:53.289634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-11-20 07:27:53.289661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.178 qpair failed and we were unable to recover it. 00:25:50.178 [2024-11-20 07:27:53.289750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-11-20 07:27:53.289778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.178 qpair failed and we were unable to recover it. 00:25:50.178 [2024-11-20 07:27:53.289866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-11-20 07:27:53.289892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.178 qpair failed and we were unable to recover it. 00:25:50.178 [2024-11-20 07:27:53.289980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-11-20 07:27:53.290005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.178 qpair failed and we were unable to recover it. 00:25:50.178 [2024-11-20 07:27:53.290118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-11-20 07:27:53.290146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.178 qpair failed and we were unable to recover it. 00:25:50.178 [2024-11-20 07:27:53.290242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-11-20 07:27:53.290282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.178 qpair failed and we were unable to recover it. 00:25:50.178 [2024-11-20 07:27:53.290412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-11-20 07:27:53.290441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.178 qpair failed and we were unable to recover it. 00:25:50.178 [2024-11-20 07:27:53.290529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-11-20 07:27:53.290558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.178 qpair failed and we were unable to recover it. 00:25:50.178 [2024-11-20 07:27:53.290644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-11-20 07:27:53.290671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.178 qpair failed and we were unable to recover it. 00:25:50.178 [2024-11-20 07:27:53.290763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-11-20 07:27:53.290795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.178 qpair failed and we were unable to recover it. 
00:25:50.178 [2024-11-20 07:27:53.290913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.178 [2024-11-20 07:27:53.290940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.178 qpair failed and we were unable to recover it. 00:25:50.178 [2024-11-20 07:27:53.291026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.179 [2024-11-20 07:27:53.291053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.179 qpair failed and we were unable to recover it. 00:25:50.179 [2024-11-20 07:27:53.291149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.179 [2024-11-20 07:27:53.291189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.179 qpair failed and we were unable to recover it. 00:25:50.179 [2024-11-20 07:27:53.291287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.179 [2024-11-20 07:27:53.291323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.179 qpair failed and we were unable to recover it. 00:25:50.179 [2024-11-20 07:27:53.291407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.179 [2024-11-20 07:27:53.291434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.179 qpair failed and we were unable to recover it. 00:25:50.179 [2024-11-20 07:27:53.291517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.179 [2024-11-20 07:27:53.291544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.179 qpair failed and we were unable to recover it. 00:25:50.179 [2024-11-20 07:27:53.291634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.179 [2024-11-20 07:27:53.291660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.179 qpair failed and we were unable to recover it. 00:25:50.179 [2024-11-20 07:27:53.291747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.179 [2024-11-20 07:27:53.291773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.179 qpair failed and we were unable to recover it. 00:25:50.179 [2024-11-20 07:27:53.291862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.179 [2024-11-20 07:27:53.291889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.179 qpair failed and we were unable to recover it. 00:25:50.179 [2024-11-20 07:27:53.291974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.179 [2024-11-20 07:27:53.292003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.179 qpair failed and we were unable to recover it. 
00:25:50.179 [2024-11-20 07:27:53.292130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.179 [2024-11-20 07:27:53.292169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.179 qpair failed and we were unable to recover it. 00:25:50.179 [2024-11-20 07:27:53.292288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.179 [2024-11-20 07:27:53.292322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.179 qpair failed and we were unable to recover it. 00:25:50.179 [2024-11-20 07:27:53.292413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.179 [2024-11-20 07:27:53.292439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.179 qpair failed and we were unable to recover it. 00:25:50.179 [2024-11-20 07:27:53.292534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.179 [2024-11-20 07:27:53.292561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.179 qpair failed and we were unable to recover it. 00:25:50.179 [2024-11-20 07:27:53.292650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.179 [2024-11-20 07:27:53.292680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.179 qpair failed and we were unable to recover it. 00:25:50.179 [2024-11-20 07:27:53.292763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.179 [2024-11-20 07:27:53.292790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.179 qpair failed and we were unable to recover it. 00:25:50.179 [2024-11-20 07:27:53.292876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.179 [2024-11-20 07:27:53.292903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.179 qpair failed and we were unable to recover it. 00:25:50.179 [2024-11-20 07:27:53.293016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.179 [2024-11-20 07:27:53.293043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.179 qpair failed and we were unable to recover it. 00:25:50.179 [2024-11-20 07:27:53.293152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.179 [2024-11-20 07:27:53.293178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.179 qpair failed and we were unable to recover it. 00:25:50.179 [2024-11-20 07:27:53.293259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.179 [2024-11-20 07:27:53.293286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.179 qpair failed and we were unable to recover it. 
00:25:50.179 [2024-11-20 07:27:53.293395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.179 [2024-11-20 07:27:53.293434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.179 qpair failed and we were unable to recover it. 00:25:50.179 [2024-11-20 07:27:53.293534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.179 [2024-11-20 07:27:53.293562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.179 qpair failed and we were unable to recover it. 00:25:50.179 [2024-11-20 07:27:53.293659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.179 [2024-11-20 07:27:53.293686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.179 qpair failed and we were unable to recover it. 00:25:50.179 [2024-11-20 07:27:53.293797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.179 [2024-11-20 07:27:53.293823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.179 qpair failed and we were unable to recover it. 00:25:50.179 [2024-11-20 07:27:53.293933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.179 [2024-11-20 07:27:53.293960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.179 qpair failed and we were unable to recover it. 00:25:50.179 [2024-11-20 07:27:53.294047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.179 [2024-11-20 07:27:53.294073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.179 qpair failed and we were unable to recover it. 00:25:50.179 [2024-11-20 07:27:53.294182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.179 [2024-11-20 07:27:53.294210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.179 qpair failed and we were unable to recover it. 00:25:50.179 [2024-11-20 07:27:53.294295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.179 [2024-11-20 07:27:53.294332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.179 qpair failed and we were unable to recover it. 00:25:50.179 [2024-11-20 07:27:53.294427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.179 [2024-11-20 07:27:53.294455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.179 qpair failed and we were unable to recover it. 00:25:50.179 [2024-11-20 07:27:53.294536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.179 [2024-11-20 07:27:53.294563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.179 qpair failed and we were unable to recover it. 
00:25:50.179 [2024-11-20 07:27:53.294667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.179 [2024-11-20 07:27:53.294693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.179 qpair failed and we were unable to recover it. 00:25:50.179 [2024-11-20 07:27:53.294778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.179 [2024-11-20 07:27:53.294804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.179 qpair failed and we were unable to recover it. 00:25:50.179 [2024-11-20 07:27:53.294892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.179 [2024-11-20 07:27:53.294919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.179 qpair failed and we were unable to recover it. 00:25:50.179 [2024-11-20 07:27:53.295001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.179 [2024-11-20 07:27:53.295028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.179 qpair failed and we were unable to recover it. 00:25:50.179 [2024-11-20 07:27:53.295153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.179 [2024-11-20 07:27:53.295194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.179 qpair failed and we were unable to recover it. 00:25:50.179 [2024-11-20 07:27:53.295285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.179 [2024-11-20 07:27:53.295321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.179 qpair failed and we were unable to recover it. 00:25:50.179 [2024-11-20 07:27:53.295405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.179 [2024-11-20 07:27:53.295432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.179 qpair failed and we were unable to recover it. 00:25:50.179 [2024-11-20 07:27:53.295517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.179 [2024-11-20 07:27:53.295545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.179 qpair failed and we were unable to recover it. 00:25:50.179 [2024-11-20 07:27:53.295625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.179 [2024-11-20 07:27:53.295652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.179 qpair failed and we were unable to recover it. 00:25:50.179 [2024-11-20 07:27:53.295737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.180 [2024-11-20 07:27:53.295765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.180 qpair failed and we were unable to recover it. 
00:25:50.180 [2024-11-20 07:27:53.295854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.180 [2024-11-20 07:27:53.295881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.180 qpair failed and we were unable to recover it. 00:25:50.180 [2024-11-20 07:27:53.295971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.180 [2024-11-20 07:27:53.295997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.180 qpair failed and we were unable to recover it. 00:25:50.180 [2024-11-20 07:27:53.296112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.180 [2024-11-20 07:27:53.296139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.180 qpair failed and we were unable to recover it. 00:25:50.180 [2024-11-20 07:27:53.296228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.180 [2024-11-20 07:27:53.296257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.180 qpair failed and we were unable to recover it. 00:25:50.180 [2024-11-20 07:27:53.296360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.180 [2024-11-20 07:27:53.296387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.180 qpair failed and we were unable to recover it. 00:25:50.180 [2024-11-20 07:27:53.296476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.180 [2024-11-20 07:27:53.296503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.180 qpair failed and we were unable to recover it. 00:25:50.180 [2024-11-20 07:27:53.296583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.180 [2024-11-20 07:27:53.296610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.180 qpair failed and we were unable to recover it. 00:25:50.180 [2024-11-20 07:27:53.296691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.180 [2024-11-20 07:27:53.296717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.180 qpair failed and we were unable to recover it. 00:25:50.180 [2024-11-20 07:27:53.296816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.180 [2024-11-20 07:27:53.296845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.180 qpair failed and we were unable to recover it. 00:25:50.180 [2024-11-20 07:27:53.296938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.180 [2024-11-20 07:27:53.296967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.180 qpair failed and we were unable to recover it. 
00:25:50.180 [2024-11-20 07:27:53.297060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.180 [2024-11-20 07:27:53.297087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.180 qpair failed and we were unable to recover it. 00:25:50.180 [2024-11-20 07:27:53.297174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.180 [2024-11-20 07:27:53.297201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.180 qpair failed and we were unable to recover it. 00:25:50.180 [2024-11-20 07:27:53.297317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.180 [2024-11-20 07:27:53.297344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.180 qpair failed and we were unable to recover it. 00:25:50.180 [2024-11-20 07:27:53.297458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.180 [2024-11-20 07:27:53.297484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.180 qpair failed and we were unable to recover it. 00:25:50.180 [2024-11-20 07:27:53.297559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.180 [2024-11-20 07:27:53.297586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.180 qpair failed and we were unable to recover it. 00:25:50.180 [2024-11-20 07:27:53.297700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.180 [2024-11-20 07:27:53.297728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.180 qpair failed and we were unable to recover it. 00:25:50.180 [2024-11-20 07:27:53.297821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.180 [2024-11-20 07:27:53.297848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.180 qpair failed and we were unable to recover it. 00:25:50.180 [2024-11-20 07:27:53.297966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.180 [2024-11-20 07:27:53.297995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.180 qpair failed and we were unable to recover it. 00:25:50.180 [2024-11-20 07:27:53.298102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.180 [2024-11-20 07:27:53.298143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.180 qpair failed and we were unable to recover it. 00:25:50.180 [2024-11-20 07:27:53.298234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.180 [2024-11-20 07:27:53.298263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.180 qpair failed and we were unable to recover it. 
00:25:50.180 [2024-11-20 07:27:53.298358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.180 [2024-11-20 07:27:53.298386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.180 qpair failed and we were unable to recover it. 00:25:50.180 [2024-11-20 07:27:53.298476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.180 [2024-11-20 07:27:53.298503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.180 qpair failed and we were unable to recover it. 00:25:50.180 [2024-11-20 07:27:53.298587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.180 [2024-11-20 07:27:53.298615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.180 qpair failed and we were unable to recover it. 00:25:50.180 [2024-11-20 07:27:53.298708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.180 [2024-11-20 07:27:53.298737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.180 qpair failed and we were unable to recover it. 00:25:50.180 [2024-11-20 07:27:53.298830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.180 [2024-11-20 07:27:53.298857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.180 qpair failed and we were unable to recover it. 00:25:50.180 [2024-11-20 07:27:53.298935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.180 [2024-11-20 07:27:53.298963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.180 qpair failed and we were unable to recover it. 00:25:50.180 [2024-11-20 07:27:53.299072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.180 [2024-11-20 07:27:53.299105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.180 qpair failed and we were unable to recover it. 00:25:50.180 [2024-11-20 07:27:53.299224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.180 [2024-11-20 07:27:53.299253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.180 qpair failed and we were unable to recover it. 00:25:50.180 [2024-11-20 07:27:53.299346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.180 [2024-11-20 07:27:53.299373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.180 qpair failed and we were unable to recover it. 00:25:50.180 [2024-11-20 07:27:53.299454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.180 [2024-11-20 07:27:53.299482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.180 qpair failed and we were unable to recover it. 
00:25:50.180 [2024-11-20 07:27:53.299566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.180 [2024-11-20 07:27:53.299593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.180 qpair failed and we were unable to recover it. 00:25:50.180 [2024-11-20 07:27:53.299675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.180 [2024-11-20 07:27:53.299701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.180 qpair failed and we were unable to recover it. 00:25:50.180 [2024-11-20 07:27:53.299781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.180 [2024-11-20 07:27:53.299807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.180 qpair failed and we were unable to recover it. 00:25:50.180 [2024-11-20 07:27:53.299896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.180 [2024-11-20 07:27:53.299923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.180 qpair failed and we were unable to recover it. 00:25:50.180 [2024-11-20 07:27:53.300002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.180 [2024-11-20 07:27:53.300028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.180 qpair failed and we were unable to recover it. 00:25:50.180 [2024-11-20 07:27:53.300141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.180 [2024-11-20 07:27:53.300167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.180 qpair failed and we were unable to recover it. 00:25:50.180 [2024-11-20 07:27:53.300248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.180 [2024-11-20 07:27:53.300274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.180 qpair failed and we were unable to recover it. 00:25:50.180 [2024-11-20 07:27:53.300373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.181 [2024-11-20 07:27:53.300400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.181 qpair failed and we were unable to recover it. 00:25:50.181 [2024-11-20 07:27:53.300509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.181 [2024-11-20 07:27:53.300535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.181 qpair failed and we were unable to recover it. 00:25:50.181 [2024-11-20 07:27:53.300619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.181 [2024-11-20 07:27:53.300645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.181 qpair failed and we were unable to recover it. 
00:25:50.181 [2024-11-20 07:27:53.300730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.181 [2024-11-20 07:27:53.300757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.181 qpair failed and we were unable to recover it. 00:25:50.181 [2024-11-20 07:27:53.300874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.181 [2024-11-20 07:27:53.300900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.181 qpair failed and we were unable to recover it. 00:25:50.181 [2024-11-20 07:27:53.300990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.181 [2024-11-20 07:27:53.301020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.181 qpair failed and we were unable to recover it. 00:25:50.181 [2024-11-20 07:27:53.301103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.181 [2024-11-20 07:27:53.301130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.181 qpair failed and we were unable to recover it. 00:25:50.181 [2024-11-20 07:27:53.301222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.181 [2024-11-20 07:27:53.301252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.181 qpair failed and we were unable to recover it. 00:25:50.181 [2024-11-20 07:27:53.301349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.181 [2024-11-20 07:27:53.301377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.181 qpair failed and we were unable to recover it. 00:25:50.181 [2024-11-20 07:27:53.301478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.181 [2024-11-20 07:27:53.301517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.181 qpair failed and we were unable to recover it. 00:25:50.181 [2024-11-20 07:27:53.301607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.181 [2024-11-20 07:27:53.301635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.181 qpair failed and we were unable to recover it. 00:25:50.181 [2024-11-20 07:27:53.301721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.181 [2024-11-20 07:27:53.301748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.181 qpair failed and we were unable to recover it. 00:25:50.181 [2024-11-20 07:27:53.301836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.181 [2024-11-20 07:27:53.301862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.181 qpair failed and we were unable to recover it. 
00:25:50.181 [2024-11-20 07:27:53.301950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.181 [2024-11-20 07:27:53.301977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.181 qpair failed and we were unable to recover it. 00:25:50.181 [2024-11-20 07:27:53.302062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.181 [2024-11-20 07:27:53.302088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.181 qpair failed and we were unable to recover it. 00:25:50.181 [2024-11-20 07:27:53.302170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.181 [2024-11-20 07:27:53.302197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.181 qpair failed and we were unable to recover it. 00:25:50.181 [2024-11-20 07:27:53.302333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.181 [2024-11-20 07:27:53.302361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.181 qpair failed and we were unable to recover it. 00:25:50.181 [2024-11-20 07:27:53.302448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.181 [2024-11-20 07:27:53.302476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.181 qpair failed and we were unable to recover it. 00:25:50.181 [2024-11-20 07:27:53.302558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.181 [2024-11-20 07:27:53.302585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.181 qpair failed and we were unable to recover it. 00:25:50.181 [2024-11-20 07:27:53.302666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.181 [2024-11-20 07:27:53.302692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.181 qpair failed and we were unable to recover it. 00:25:50.181 [2024-11-20 07:27:53.302782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.181 [2024-11-20 07:27:53.302811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.181 qpair failed and we were unable to recover it. 00:25:50.181 [2024-11-20 07:27:53.302891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.181 [2024-11-20 07:27:53.302918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.181 qpair failed and we were unable to recover it. 00:25:50.181 [2024-11-20 07:27:53.303061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.181 [2024-11-20 07:27:53.303089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.181 qpair failed and we were unable to recover it. 
00:25:50.181 [2024-11-20 07:27:53.303173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.181 [2024-11-20 07:27:53.303200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.181 qpair failed and we were unable to recover it. 00:25:50.181 [2024-11-20 07:27:53.303281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.181 [2024-11-20 07:27:53.303317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.181 qpair failed and we were unable to recover it. 00:25:50.181 [2024-11-20 07:27:53.303404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.181 [2024-11-20 07:27:53.303433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.181 qpair failed and we were unable to recover it. 00:25:50.181 [2024-11-20 07:27:53.303520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.181 [2024-11-20 07:27:53.303548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.181 qpair failed and we were unable to recover it. 00:25:50.181 [2024-11-20 07:27:53.303644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.181 [2024-11-20 07:27:53.303672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.181 qpair failed and we were unable to recover it. 00:25:50.181 [2024-11-20 07:27:53.303752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.181 [2024-11-20 07:27:53.303780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.181 qpair failed and we were unable to recover it. 00:25:50.181 [2024-11-20 07:27:53.303919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.181 [2024-11-20 07:27:53.303952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.181 qpair failed and we were unable to recover it. 00:25:50.181 [2024-11-20 07:27:53.304040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.181 [2024-11-20 07:27:53.304068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.181 qpair failed and we were unable to recover it. 00:25:50.181 [2024-11-20 07:27:53.304154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.181 [2024-11-20 07:27:53.304181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.181 qpair failed and we were unable to recover it. 00:25:50.181 [2024-11-20 07:27:53.304278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.181 [2024-11-20 07:27:53.304311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.181 qpair failed and we were unable to recover it. 
00:25:50.181 [2024-11-20 07:27:53.304400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.181 [2024-11-20 07:27:53.304428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.181 qpair failed and we were unable to recover it. 00:25:50.181 [2024-11-20 07:27:53.304513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.181 [2024-11-20 07:27:53.304539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.181 qpair failed and we were unable to recover it. 00:25:50.181 [2024-11-20 07:27:53.304628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.181 [2024-11-20 07:27:53.304654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.181 qpair failed and we were unable to recover it. 00:25:50.181 [2024-11-20 07:27:53.304730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.181 [2024-11-20 07:27:53.304755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.181 qpair failed and we were unable to recover it. 00:25:50.181 [2024-11-20 07:27:53.304867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.181 [2024-11-20 07:27:53.304895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.181 qpair failed and we were unable to recover it. 00:25:50.181 [2024-11-20 07:27:53.304978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.182 [2024-11-20 07:27:53.305004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420 00:25:50.182 qpair failed and we were unable to recover it. 00:25:50.182 A controller has encountered a failure and is being reset. 00:25:50.182 [2024-11-20 07:27:53.305100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.182 [2024-11-20 07:27:53.305127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce10000b90 with addr=10.0.0.2, port=4420 00:25:50.182 qpair failed and we were unable to recover it. 00:25:50.182 [2024-11-20 07:27:53.305230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.182 [2024-11-20 07:27:53.305270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce14000b90 with addr=10.0.0.2, port=4420 00:25:50.182 qpair failed and we were unable to recover it. 00:25:50.182 [2024-11-20 07:27:53.305366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.182 [2024-11-20 07:27:53.305395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.182 qpair failed and we were unable to recover it. 00:25:50.182 [2024-11-20 07:27:53.305540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.182 [2024-11-20 07:27:53.305566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420 00:25:50.182 qpair failed and we were unable to recover it. 
00:25:50.182 [2024-11-20 07:27:53.305651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.182 [2024-11-20 07:27:53.305677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420
00:25:50.182 qpair failed and we were unable to recover it.
00:25:50.182 [2024-11-20 07:27:53.305785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.182 [2024-11-20 07:27:53.305811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420
00:25:50.182 qpair failed and we were unable to recover it.
00:25:50.182 [2024-11-20 07:27:53.305887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.182 [2024-11-20 07:27:53.305914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce1c000b90 with addr=10.0.0.2, port=4420
00:25:50.182 qpair failed and we were unable to recover it.
00:25:50.182 [2024-11-20 07:27:53.305998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.182 [2024-11-20 07:27:53.306025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2bfa0 with addr=10.0.0.2, port=4420
00:25:50.182 qpair failed and we were unable to recover it.
00:25:50.182 [2024-11-20 07:27:53.306151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.182 [2024-11-20 07:27:53.306202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b39f30 with addr=10.0.0.2, port=4420 [2024-11-20 07:27:53.306224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b39f30 is same with the state(6) to be set
00:25:50.182 [2024-11-20 07:27:53.306252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b39f30 (9): Bad file descriptor
00:25:50.182 [2024-11-20 07:27:53.306272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:25:50.182 [2024-11-20 07:27:53.306286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:25:50.182 [2024-11-20 07:27:53.306316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:25:50.182 Unable to reset the controller.
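Here the retry loop gives up: flushing tqpair=0x1b39f30 fails with "Bad file descriptor", nvme_ctrlr_process_init reports the controller for nqn.2016-06.io.spdk:cnode1 in an error state, spdk_nvme_ctrlr_reconnect_poll_async reports that reinitialization failed, and the host concludes it is unable to reset the controller. A manual check from the initiator side (not part of the test, assuming nvme-cli is installed) to see whether the target's listener on 10.0.0.2:4420 has come back before expecting a reconnect to succeed:
    nvme discover -t tcp -a 10.0.0.2 -s 4420 || echo 'discovery service on 10.0.0.2:4420 not reachable yet'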
00:25:50.182 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:50.182 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:25:50.182 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:50.182 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:50.182 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:50.182 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:50.182 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:50.182 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.182 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:50.182 Malloc0 00:25:50.182 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.182 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:50.182 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.182 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:50.182 [2024-11-20 07:27:53.450767] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.182 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.182 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:50.182 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.182 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:50.182 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.182 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:50.182 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.182 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:50.182 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.182 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:50.182 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.182 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:50.182 [2024-11-20 07:27:53.479030] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:50.182 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.182 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:50.182 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.182 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:50.182 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.182 07:27:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2614509 00:25:51.115 Controller properly reset. 00:25:56.376 Initializing NVMe Controllers 00:25:56.376 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:56.376 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:56.376 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:25:56.376 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:25:56.376 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:25:56.376 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:25:56.376 Initialization complete. Launching workers. 
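The rpc_cmd calls above rebuild the target side for this test case: a 64 MB malloc bdev with 512-byte blocks, a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 (any host allowed, serial SPDK00000000000001) with that bdev as a namespace, and data and discovery listeners on 10.0.0.2:4420, after which the controller is properly reset and the I/O workers start. A minimal sketch of the same sequence issued directly with SPDK's scripts/rpc.py follows; it assumes a separately started target application (the test itself goes through its rpc_cmd wrapper), with the flags copied verbatim from the log:
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420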
00:25:56.376 Starting thread on core 1 00:25:56.376 Starting thread on core 2 00:25:56.376 Starting thread on core 3 00:25:56.376 Starting thread on core 0 00:25:56.376 07:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:25:56.376 00:25:56.376 real 0m10.704s 00:25:56.376 user 0m33.794s 00:25:56.376 sys 0m7.684s 00:25:56.376 07:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:56.376 07:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:56.376 ************************************ 00:25:56.376 END TEST nvmf_target_disconnect_tc2 00:25:56.376 ************************************ 00:25:56.376 07:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:25:56.376 07:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:25:56.376 07:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:25:56.376 07:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:56.376 07:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:25:56.376 07:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:56.376 07:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:25:56.376 07:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:56.376 07:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:56.376 rmmod nvme_tcp 00:25:56.376 rmmod nvme_fabrics 00:25:56.376 rmmod nvme_keyring 00:25:56.376 07:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:56.376 07:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:25:56.376 07:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:25:56.376 07:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2614922 ']' 00:25:56.376 07:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2614922 00:25:56.376 07:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' -z 2614922 ']' 00:25:56.376 07:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # kill -0 2614922 00:25:56.376 07:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # uname 00:25:56.376 07:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:56.376 07:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2614922 00:25:56.376 07:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_4 00:25:56.376 07:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_4 = sudo ']' 00:25:56.376 07:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2614922' 00:25:56.376 killing process with pid 2614922 00:25:56.376 07:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@971 -- # kill 2614922 00:25:56.376 07:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@976 -- # wait 2614922 00:25:56.376 07:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:56.376 07:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:56.376 07:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:56.376 07:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:25:56.376 07:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:25:56.376 07:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:56.376 07:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:25:56.376 07:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:56.376 07:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:56.376 07:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:56.376 07:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:56.376 07:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:58.911 07:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:58.911 00:25:58.911 real 0m15.746s 00:25:58.911 user 0m59.773s 00:25:58.911 sys 0m10.144s 00:25:58.911 07:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:58.911 07:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:58.911 ************************************ 00:25:58.911 END TEST nvmf_target_disconnect 00:25:58.911 ************************************ 00:25:58.911 07:28:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:25:58.911 00:25:58.911 real 5m7.748s 00:25:58.911 user 11m9.572s 00:25:58.911 sys 1m17.045s 00:25:58.911 07:28:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:58.911 07:28:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.911 ************************************ 00:25:58.911 END TEST nvmf_host 00:25:58.911 ************************************ 00:25:58.911 07:28:01 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:25:58.911 07:28:01 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:25:58.911 07:28:01 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:25:58.911 07:28:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:25:58.911 07:28:01 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:58.911 07:28:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:58.911 ************************************ 00:25:58.911 START TEST nvmf_target_core_interrupt_mode 00:25:58.911 ************************************ 00:25:58.911 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:25:58.911 * Looking for test storage... 00:25:58.911 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:25:58.911 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:58.911 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:25:58.911 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:58.911 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:58.911 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:58.911 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:58.911 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:58.911 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:25:58.911 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:25:58.911 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:25:58.911 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:25:58.911 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:25:58.911 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:25:58.911 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:25:58.911 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:58.911 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:25:58.911 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:25:58.911 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:58.911 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:58.911 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:25:58.911 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:25:58.911 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:58.911 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:25:58.911 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:25:58.911 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:25:58.911 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:25:58.911 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:58.911 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:25:58.911 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:25:58.911 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:58.911 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:58.911 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:25:58.911 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:58.911 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:58.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:58.911 --rc genhtml_branch_coverage=1 00:25:58.911 --rc genhtml_function_coverage=1 00:25:58.911 --rc genhtml_legend=1 00:25:58.911 --rc geninfo_all_blocks=1 00:25:58.911 --rc geninfo_unexecuted_blocks=1 00:25:58.911 00:25:58.911 ' 00:25:58.911 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:58.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:58.911 --rc genhtml_branch_coverage=1 00:25:58.911 --rc genhtml_function_coverage=1 00:25:58.911 --rc genhtml_legend=1 00:25:58.911 --rc geninfo_all_blocks=1 00:25:58.912 --rc geninfo_unexecuted_blocks=1 00:25:58.912 00:25:58.912 ' 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:58.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:58.912 --rc genhtml_branch_coverage=1 00:25:58.912 --rc genhtml_function_coverage=1 00:25:58.912 --rc genhtml_legend=1 00:25:58.912 --rc geninfo_all_blocks=1 00:25:58.912 --rc geninfo_unexecuted_blocks=1 00:25:58.912 00:25:58.912 ' 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:58.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:58.912 --rc genhtml_branch_coverage=1 00:25:58.912 --rc genhtml_function_coverage=1 00:25:58.912 --rc genhtml_legend=1 00:25:58.912 --rc geninfo_all_blocks=1 00:25:58.912 --rc geninfo_unexecuted_blocks=1 00:25:58.912 00:25:58.912 ' 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:58.912 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:25:58.912 ************************************ 00:25:58.912 START TEST nvmf_abort 00:25:58.912 ************************************ 00:25:58.912 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:25:58.912 * Looking for test storage... 00:25:58.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:58.912 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:58.912 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:25:58.912 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:58.912 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:58.912 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:58.912 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:58.912 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:58.912 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:25:58.912 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:25:58.912 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:25:58.912 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:25:58.912 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:25:58.912 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:25:58.912 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:25:58.912 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:58.912 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:25:58.912 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:25:58.912 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:58.912 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:58.912 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:25:58.912 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:25:58.912 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:58.912 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:25:58.912 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:25:58.912 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:25:58.912 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:25:58.912 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:58.912 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:25:58.912 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:25:58.912 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:58.912 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:58.912 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:58.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:58.913 --rc genhtml_branch_coverage=1 00:25:58.913 --rc genhtml_function_coverage=1 00:25:58.913 --rc genhtml_legend=1 00:25:58.913 --rc geninfo_all_blocks=1 00:25:58.913 --rc geninfo_unexecuted_blocks=1 00:25:58.913 00:25:58.913 ' 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:58.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:58.913 --rc genhtml_branch_coverage=1 00:25:58.913 --rc genhtml_function_coverage=1 00:25:58.913 --rc genhtml_legend=1 00:25:58.913 --rc geninfo_all_blocks=1 00:25:58.913 --rc geninfo_unexecuted_blocks=1 00:25:58.913 00:25:58.913 ' 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:58.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:58.913 --rc genhtml_branch_coverage=1 00:25:58.913 --rc genhtml_function_coverage=1 00:25:58.913 --rc genhtml_legend=1 00:25:58.913 --rc geninfo_all_blocks=1 00:25:58.913 --rc geninfo_unexecuted_blocks=1 00:25:58.913 00:25:58.913 ' 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:58.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:58.913 --rc genhtml_branch_coverage=1 00:25:58.913 --rc genhtml_function_coverage=1 00:25:58.913 --rc genhtml_legend=1 00:25:58.913 --rc geninfo_all_blocks=1 00:25:58.913 --rc geninfo_unexecuted_blocks=1 00:25:58.913 00:25:58.913 ' 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:58.913 07:28:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:25:58.913 07:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:00.821 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:00.821 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:26:00.821 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:00.821 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:00.821 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:00.821 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:00.821 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:00.821 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:26:00.821 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:00.821 07:28:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:26:00.821 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:26:00.821 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:26:00.821 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:26:00.821 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:26:00.821 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:26:00.821 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:00.821 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:00.821 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:00.821 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:00.822 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
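The scan traced here first buckets the known Intel/Mellanox PCI device IDs (e810, x722, mlx) and then resolves each matching PCI address to its kernel interface through sysfs, which is how the harness ends up with the cvl_0_0/cvl_0_1 names used later. A stripped-down sketch of that resolution step, reusing the address reported above, is:

  pci=0000:09:00.0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per interface backed by this PCI function
  pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names, e.g. cvl_0_0
  echo "Found net devices under $pci: ${pci_net_devs[*]}"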
00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:00.822 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:00.822 Found net devices under 0000:09:00.0: cvl_0_0 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:00.822 Found net devices under 0000:09:00.1: cvl_0_1 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:00.822 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:01.082 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:01.082 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:01.082 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:01.082 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:01.082 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:01.082 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:01.082 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:01.082 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:01.082 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:01.082 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:26:01.082 00:26:01.082 --- 10.0.0.2 ping statistics --- 00:26:01.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:01.082 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:26:01.082 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:01.082 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:01.082 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:26:01.082 00:26:01.082 --- 10.0.0.1 ping statistics --- 00:26:01.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:01.082 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:26:01.083 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:01.083 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:26:01.083 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:01.083 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:01.083 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:01.083 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:01.083 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:01.083 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:01.083 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:01.083 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:26:01.083 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:01.083 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:01.083 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:01.083 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=2617730 00:26:01.083 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:26:01.083 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2617730 00:26:01.083 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 2617730 ']' 00:26:01.083 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:01.083 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:01.083 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:01.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:01.083 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:01.083 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:01.083 [2024-11-20 07:28:04.401723] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:01.083 [2024-11-20 07:28:04.402787] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:26:01.083 [2024-11-20 07:28:04.402842] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:01.083 [2024-11-20 07:28:04.478067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:01.342 [2024-11-20 07:28:04.539549] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:01.342 [2024-11-20 07:28:04.539611] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:01.342 [2024-11-20 07:28:04.539634] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:01.342 [2024-11-20 07:28:04.539660] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:01.342 [2024-11-20 07:28:04.539670] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:01.342 [2024-11-20 07:28:04.541165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:01.342 [2024-11-20 07:28:04.541229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:01.342 [2024-11-20 07:28:04.541232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:01.342 [2024-11-20 07:28:04.635387] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:01.342 [2024-11-20 07:28:04.635597] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:01.342 [2024-11-20 07:28:04.635616] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
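At this point the harness has moved the first E810 port (cvl_0_0) into a private namespace with 10.0.0.2/24, left the second port (cvl_0_1) in the root namespace with 10.0.0.1/24, opened TCP/4420 in iptables, verified both directions with ping, and started nvmf_tgt inside the namespace with --interrupt-mode and core mask 0xE (cores 1-3, matching the reactor messages above). A condensed sketch of that bring-up, assuming an SPDK checkout for the binary/script paths and a simple polling loop standing in for the harness's waitforlisten helper:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.2                                   # target address reachable from the root namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and back again
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
  until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.2; done   # stand-in for waitforlisten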
00:26:01.342 [2024-11-20 07:28:04.635891] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:26:01.342 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:01.342 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:26:01.342 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:01.343 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:01.343 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:01.343 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:01.343 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:26:01.343 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.343 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:01.343 [2024-11-20 07:28:04.685975] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:01.343 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.343 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:26:01.343 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.343 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:01.343 Malloc0 00:26:01.343 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.343 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:01.343 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.343 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:01.343 Delay0 00:26:01.343 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.343 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:26:01.343 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.343 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:01.343 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.343 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:26:01.343 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
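abort.sh has now provisioned everything the abort workload needs: a TCP transport, a 64 MiB ramdisk, a delay bdev stacked on top of it so outstanding I/O lingers in the target long enough to be aborted, and an allow-any-host subsystem carrying that delay bdev as its first namespace. The same provisioning expressed as plain rpc.py calls, every value copied from the trace (sketch only; rpc_cmd in the harness already points at the right RPC socket):

  RPC="./scripts/rpc.py"
  $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
  $RPC bdev_malloc_create 64 4096 -b Malloc0                  # 64 MiB, 4096-byte blocks
  $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420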
00:26:01.343 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:01.343 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.343 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:01.343 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.343 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:01.343 [2024-11-20 07:28:04.754138] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:01.343 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.343 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:01.343 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.343 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:01.343 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.343 07:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:26:01.600 [2024-11-20 07:28:04.864710] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:26:04.126 Initializing NVMe Controllers 00:26:04.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:26:04.126 controller IO queue size 128 less than required 00:26:04.126 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:26:04.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:26:04.126 Initialization complete. Launching workers. 
00:26:04.126 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28621 00:26:04.126 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28678, failed to submit 66 00:26:04.126 success 28621, unsuccessful 57, failed 0 00:26:04.126 07:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:04.126 07:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.126 07:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:04.126 07:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.126 07:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:26:04.126 07:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:26:04.126 07:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:04.126 07:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:26:04.126 07:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:04.126 07:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:26:04.126 07:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:04.126 07:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:04.126 rmmod nvme_tcp 00:26:04.126 rmmod nvme_fabrics 00:26:04.126 rmmod nvme_keyring 00:26:04.126 07:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:04.126 07:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:26:04.126 07:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:26:04.126 07:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2617730 ']' 00:26:04.126 07:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2617730 00:26:04.126 07:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 2617730 ']' 00:26:04.126 07:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 2617730 00:26:04.126 07:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:26:04.126 07:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:04.126 07:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2617730 00:26:04.126 07:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:04.126 07:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:04.126 07:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2617730' 00:26:04.126 killing process with pid 2617730 
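Those counters come from SPDK's bundled abort example: against the heavily delayed Delay0 namespace, nearly every queued I/O is aborted instead of completing normally (28621 successful aborts versus 123 ordinary completions in this run), which is exactly what the test wants to exercise. The invocation recorded a few lines earlier, runnable as-is against any NVMe/TCP listener (path shown relative to an SPDK build tree), was:

  ./build/examples/abort \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -c 0x1 -t 1 -l warning -q 128     # flags exactly as recorded: core mask 0x1, queue depth 128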
00:26:04.126 07:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@971 -- # kill 2617730 00:26:04.126 07:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@976 -- # wait 2617730 00:26:04.126 07:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:04.126 07:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:04.126 07:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:04.126 07:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:26:04.126 07:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:26:04.126 07:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:04.126 07:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:26:04.126 07:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:04.126 07:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:04.126 07:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:04.126 07:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:04.126 07:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:06.027 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:06.027 00:26:06.027 real 0m7.369s 00:26:06.027 user 0m9.325s 00:26:06.027 sys 0m2.896s 00:26:06.027 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:06.027 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:06.027 ************************************ 00:26:06.027 END TEST nvmf_abort 00:26:06.027 ************************************ 00:26:06.027 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:26:06.027 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:26:06.027 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:06.027 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:06.027 ************************************ 00:26:06.027 START TEST nvmf_ns_hotplug_stress 00:26:06.027 ************************************ 00:26:06.027 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:26:06.284 * Looking for test storage... 
00:26:06.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:06.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.284 --rc genhtml_branch_coverage=1 00:26:06.284 --rc genhtml_function_coverage=1 00:26:06.284 --rc genhtml_legend=1 00:26:06.284 --rc geninfo_all_blocks=1 00:26:06.284 --rc geninfo_unexecuted_blocks=1 00:26:06.284 00:26:06.284 ' 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:06.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.284 --rc genhtml_branch_coverage=1 00:26:06.284 --rc genhtml_function_coverage=1 00:26:06.284 --rc genhtml_legend=1 00:26:06.284 --rc geninfo_all_blocks=1 00:26:06.284 --rc geninfo_unexecuted_blocks=1 00:26:06.284 00:26:06.284 ' 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:06.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.284 --rc genhtml_branch_coverage=1 00:26:06.284 --rc genhtml_function_coverage=1 00:26:06.284 --rc genhtml_legend=1 00:26:06.284 --rc geninfo_all_blocks=1 00:26:06.284 --rc geninfo_unexecuted_blocks=1 00:26:06.284 00:26:06.284 ' 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:06.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.284 --rc genhtml_branch_coverage=1 00:26:06.284 --rc genhtml_function_coverage=1 
00:26:06.284 --rc genhtml_legend=1 00:26:06.284 --rc geninfo_all_blocks=1 00:26:06.284 --rc geninfo_unexecuted_blocks=1 00:26:06.284 00:26:06.284 ' 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
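(The lt/cmp_versions trace above is the lcov version gate. A rough bash equivalent, written here only to illustrate the field-by-field comparison and not taken from the SPDK scripts, is:)

    # Sketch: split each version on dots/dashes and compare numerically, field by field.
    version_lt() {
        local -a ver1 ver2
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$2"
        local i
        for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
            ((${ver1[i]:-0} < ${ver2[i]:-0})) && return 0
            ((${ver1[i]:-0} > ${ver2[i]:-0})) && return 1
        done
        return 1
    }
    # The trace compares the detected lcov 1.15 against 2, which appears to select
    # the older "--rc lcov_branch_coverage=1" option spelling seen above.
    version_lt 1.15 2 && echo "lcov predates 2.x"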
00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:26:06.284 07:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:08.877 07:28:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:08.877 07:28:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:08.877 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:08.877 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:08.877 
07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:08.877 Found net devices under 0000:09:00.0: cvl_0_0 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:08.877 Found net devices under 0000:09:00.1: cvl_0_1 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:08.877 07:28:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:08.877 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:08.877 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:26:08.877 00:26:08.877 --- 10.0.0.2 ping statistics --- 00:26:08.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:08.877 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:08.877 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:08.877 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:26:08.877 00:26:08.877 --- 10.0.0.1 ping statistics --- 00:26:08.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:08.877 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2620067 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2620067 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 2620067 ']' 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:26:08.877 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:08.878 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:08.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:08.878 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:08.878 07:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:08.878 [2024-11-20 07:28:11.949107] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:08.878 [2024-11-20 07:28:11.950187] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:26:08.878 [2024-11-20 07:28:11.950249] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:08.878 [2024-11-20 07:28:12.022633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:08.878 [2024-11-20 07:28:12.077609] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:08.878 [2024-11-20 07:28:12.077675] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:08.878 [2024-11-20 07:28:12.077689] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:08.878 [2024-11-20 07:28:12.077699] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:08.878 [2024-11-20 07:28:12.077709] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:08.878 [2024-11-20 07:28:12.079119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:08.878 [2024-11-20 07:28:12.079180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:08.878 [2024-11-20 07:28:12.079184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:08.878 [2024-11-20 07:28:12.166143] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:08.878 [2024-11-20 07:28:12.166395] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:08.878 [2024-11-20 07:28:12.166409] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:08.878 [2024-11-20 07:28:12.166683] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
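(A condensed sketch of how the target above is started. The real command, visible earlier in this log, runs from the Jenkins workspace under ip netns exec cvl_0_0_ns_spdk; paths are shortened here.)

    # Sketch: interrupt-mode nvmf_tgt on cores 1-3 inside the target namespace.
    sudo ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    # -m 0xE selects cores 1,2,3 (the three "Reactor started on core" notices above);
    # --interrupt-mode is what produces the spdk_interrupt_mode_enable notice.
    # Wait for the RPC socket before driving the target with rpc.py:
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done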
00:26:08.878 07:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:08.878 07:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:26:08.878 07:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:08.878 07:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:08.878 07:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:08.878 07:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:08.878 07:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:26:08.878 07:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:09.135 [2024-11-20 07:28:12.479940] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:09.135 07:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:26:09.396 07:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:09.654 [2024-11-20 07:28:13.036205] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:09.654 07:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:09.913 07:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:26:10.476 Malloc0 00:26:10.476 07:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:10.476 Delay0 00:26:10.476 07:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:11.043 07:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:26:11.302 NULL1 00:26:11.302 07:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
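(The stress phase that follows repeats a short RPC cycle while spdk_nvme_perf runs. A sketch of that loop, with rpc.py shortened to scripts/rpc.py and PERF_PID standing in for the perf process id set a few lines below, is:)

    # Sketch of the hotplug cycle traced below: hot-remove namespace 1 under I/O,
    # re-attach the Delay0 bdev, and grow NULL1 by one size unit per pass.
    rpc=scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do
        $rpc nvmf_subsystem_remove_ns "$nqn" 1
        $rpc nvmf_subsystem_add_ns "$nqn" Delay0
        $rpc bdev_null_resize NULL1 $((++null_size))
    done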
00:26:11.559 07:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2620480 00:26:11.559 07:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:26:11.559 07:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2620480 00:26:11.559 07:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:12.933 Read completed with error (sct=0, sc=11) 00:26:12.933 07:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:12.933 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:12.933 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:12.933 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:12.933 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:12.933 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:12.933 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:12.934 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:13.191 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:13.191 07:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:26:13.191 07:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:26:13.449 true 00:26:13.449 07:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2620480 00:26:13.449 07:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:14.383 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:14.383 07:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:14.383 07:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:26:14.383 07:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:26:14.641 true 00:26:14.641 07:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2620480 00:26:14.641 07:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:14.898 07:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:15.156 07:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:26:15.156 07:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:26:15.413 true 00:26:15.413 07:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2620480 00:26:15.413 07:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:16.345 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:16.345 07:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:16.345 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:16.602 07:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:26:16.602 07:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:26:16.860 true 00:26:16.860 07:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2620480 00:26:16.860 07:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:17.118 07:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:17.375 07:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:26:17.375 07:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:26:17.633 true 00:26:17.633 07:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2620480 00:26:17.633 07:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:17.891 07:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:26:18.148 07:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:26:18.148 07:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:26:18.406 true 00:26:18.406 07:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2620480 00:26:18.406 07:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:19.339 07:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:19.597 07:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:26:19.597 07:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:26:19.855 true 00:26:19.855 07:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2620480 00:26:19.855 07:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:20.113 07:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:20.370 07:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:26:20.370 07:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:26:20.628 true 00:26:20.886 07:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2620480 00:26:20.886 07:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:21.144 07:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:21.402 07:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:26:21.402 07:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:26:21.659 true 00:26:21.659 07:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2620480 00:26:21.659 
07:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:22.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:22.592 07:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:22.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:22.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:22.851 07:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:26:22.851 07:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:26:23.108 true 00:26:23.108 07:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2620480 00:26:23.108 07:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:23.366 07:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:23.624 07:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:26:23.624 07:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:26:23.882 true 00:26:23.882 07:28:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2620480 00:26:23.882 07:28:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:24.815 07:28:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:24.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:24.815 07:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:26:24.815 07:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:26:25.073 true 00:26:25.073 07:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2620480 00:26:25.073 07:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:25.330 07:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:25.588 07:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:26:25.588 07:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:26:25.845 true 00:26:25.845 07:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2620480 00:26:25.845 07:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:26.103 07:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:26.669 07:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:26:26.669 07:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:26:26.669 true 00:26:26.669 07:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2620480 00:26:26.669 07:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:27.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:27.602 07:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:27.859 07:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:26:27.859 07:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:26:28.117 true 00:26:28.117 07:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2620480 00:26:28.117 07:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:28.683 07:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:28.683 07:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 
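The repeating @44-@50 trace lines above are single passes of the namespace hotplug stress loop in test/nvmf/target/ns_hotplug_stress.sh: while the background I/O job (PID 2620480) is still alive, namespace 1 is hot-removed from nqn.2016-06.io.spdk:cnode1, re-added backed by the Delay0 bdev, and the NULL1 null bdev is grown by one block. The interleaved "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines are the rate-limited reads that race against the momentarily missing namespace, which is the condition this stress test is designed to provoke. A minimal sketch of the loop, reconstructed from the trace markers (the while/kill -0 form, the PERF_PID name, and the bare rpc.py shorthand for scripts/rpc.py are assumptions, not the script's verbatim text):

    # Reconstructed from the set -x markers @44-@50 above; not the script's verbatim text.
    while kill -0 "$PERF_PID"; do                                       # @44: loop while the I/O generator is alive
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # @45: hot-remove namespace 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # @46: hot-add it back, backed by Delay0
        null_size=$((null_size + 1))                                    # @49: 1006, 1007, ... in this run
        rpc.py bdev_null_resize NULL1 "$null_size"                      # @50: grow NULL1; the RPC prints "true"
    done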
00:26:28.683 07:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:26:28.941 true 00:26:28.941 07:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2620480 00:26:28.941 07:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:29.980 07:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:29.980 07:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:26:29.980 07:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:26:30.238 true 00:26:30.238 07:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2620480 00:26:30.238 07:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:30.804 07:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:30.804 07:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:26:30.804 07:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:26:31.063 true 00:26:31.063 07:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2620480 00:26:31.063 07:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:31.320 07:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:31.886 07:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:26:31.886 07:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:26:31.886 true 00:26:31.886 07:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2620480 00:26:31.886 07:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:32.818 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:32.818 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:32.818 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:33.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:33.075 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:26:33.075 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:26:33.333 true 00:26:33.333 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2620480 00:26:33.333 07:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:33.591 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:34.156 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:26:34.156 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:26:34.156 true 00:26:34.156 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2620480 00:26:34.156 07:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:35.091 07:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:35.091 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:35.349 07:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:26:35.349 07:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:26:35.607 true 00:26:35.607 07:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2620480 00:26:35.607 07:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:35.865 07:28:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:36.123 07:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:26:36.123 07:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:26:36.380 true 00:26:36.380 07:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2620480 00:26:36.381 07:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:37.314 07:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:37.314 07:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:26:37.314 07:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:26:37.572 true 00:26:37.572 07:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2620480 00:26:37.572 07:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:37.830 07:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:38.088 07:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:26:38.088 07:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:26:38.653 true 00:26:38.653 07:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2620480 00:26:38.653 07:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:38.653 07:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:38.911 07:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:26:38.911 07:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:26:39.169 true 00:26:39.427 07:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2620480 00:26:39.427 07:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:40.361 07:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:40.618 07:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:26:40.619 07:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:26:40.876 true 00:26:40.876 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2620480 00:26:40.876 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:41.133 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:41.390 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:26:41.390 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:26:41.648 true 00:26:41.648 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2620480 00:26:41.648 07:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:41.906 Initializing NVMe Controllers 00:26:41.906 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:41.906 Controller IO queue size 128, less than required. 00:26:41.906 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:41.906 Controller IO queue size 128, less than required. 00:26:41.906 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:41.906 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:41.906 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:41.906 Initialization complete. Launching workers. 
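The "Controller IO queue size 128, less than required" warnings above mean what they say: the I/O generator requested a deeper queue than the controller's 128-entry IO queue, so some requests sit queued in the host NVMe driver; it is informational, not a failure. The per-namespace summary printed next has one row per attached namespace with columns IOPS, MiB/s, and average/min/max latency in microseconds, plus a Total row. The Total row is the sum of the per-namespace IOPS and throughput (737.01 + 9294.51 = 10031.52 IOPS, 0.36 + 4.54 = 4.90 MiB/s), and its average latency matches the IOPS-weighted mean of the two rows: (737.01 * 78647.05 + 9294.51 * 13771.90) / 10031.52 ≈ 18538 us. NSID 1 is the namespace that has been hot-removed and re-added throughout the run, which is consistent with its much lower throughput and its roughly 1.01 s worst-case latency compared with NSID 2.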
00:26:41.906 ======================================================== 00:26:41.906 Latency(us) 00:26:41.906 Device Information : IOPS MiB/s Average min max 00:26:41.906 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 737.01 0.36 78647.05 3390.51 1013221.99 00:26:41.906 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9294.51 4.54 13771.90 1581.53 539947.93 00:26:41.906 ======================================================== 00:26:41.906 Total : 10031.52 4.90 18538.23 1581.53 1013221.99 00:26:41.906 00:26:41.906 07:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:42.163 07:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:26:42.163 07:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:26:42.419 true 00:26:42.419 07:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2620480 00:26:42.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2620480) - No such process 00:26:42.419 07:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2620480 00:26:42.419 07:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:42.676 07:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:42.934 07:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:26:42.934 07:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:26:42.934 07:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:26:42.934 07:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:42.934 07:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:26:43.191 null0 00:26:43.191 07:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:43.191 07:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:43.191 07:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:26:43.449 null1 00:26:43.449 07:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:43.449 
07:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:43.449 07:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:26:43.707 null2 00:26:43.707 07:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:43.707 07:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:43.707 07:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:26:43.965 null3 00:26:43.965 07:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:43.965 07:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:43.965 07:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:26:44.223 null4 00:26:44.223 07:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:44.223 07:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:44.223 07:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:26:44.481 null5 00:26:44.739 07:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:44.739 07:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:44.740 07:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:26:44.998 null6 00:26:44.998 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:44.998 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:44.998 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:26:45.257 null7 00:26:45.257 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:45.257 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:45.257 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:26:45.257 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:45.257 07:28:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:26:45.257 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:26:45.257 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:45.257 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:26:45.257 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:45.257 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:45.257 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:45.257 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:45.257 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:26:45.257 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:45.257 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:45.257 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:26:45.257 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:26:45.257 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:45.257 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:26:45.257 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:45.257 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:26:45.257 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:45.257 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:45.257 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:26:45.257 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:45.257 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:45.257 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:45.257 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:45.257 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:26:45.257 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:26:45.257 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:45.257 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:26:45.257 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:45.257 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:45.257 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:45.257 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:45.258 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:26:45.258 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:26:45.258 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:45.258 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:45.258 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:26:45.258 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:45.258 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:45.258 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:45.258 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:26:45.258 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:26:45.258 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:45.258 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:26:45.258 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:45.258 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:45.258 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:45.258 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:45.258 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
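The @58-@64 lines above are the start of the second phase of the test: eight 100 MiB null bdevs with a 4096-byte block size (null0 through null7) are created, then eight add_remove workers are launched in the background, one per namespace ID, and their PIDs are collected so the script can wait for them (the @66 wait listing the eight PIDs appears just below). A minimal sketch of that launcher, reconstructed from the trace markers (the for-loop form and the bare rpc.py shorthand are assumptions, not the script's verbatim text):

    # Reconstructed from the set -x markers @58-@66 above; not the script's verbatim text.
    nthreads=8
    pids=()                                          # @58
    for ((i = 0; i < nthreads; i++)); do             # @59
        rpc.py bdev_null_create "null$i" 100 4096    # @60: name, size in MiB, block size in bytes
    done
    for ((i = 0; i < nthreads; i++)); do             # @62
        add_remove $((i + 1)) "null$i" &             # @63: e.g. "add_remove 1 null0" ... "add_remove 8 null7"
        pids+=($!)                                   # @64: remember each worker's PID
    done
    wait "${pids[@]}"                                # @66: block until all eight workers finish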
00:26:45.258 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:26:45.258 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:45.258 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:26:45.258 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:45.258 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:45.258 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:45.258 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:45.258 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:26:45.258 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:26:45.258 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:45.258 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:26:45.258 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:45.258 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:45.258 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2624498 2624499 2624501 2624503 2624505 2624507 2624509 2624511 00:26:45.258 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:45.258 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:45.516 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:45.516 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:45.516 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:45.516 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:45.516 07:28:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:45.516 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:45.516 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:45.516 07:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:45.774 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:45.774 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:45.774 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:45.774 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:45.774 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:45.774 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:45.774 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:45.774 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:45.774 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:45.774 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:45.774 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:45.774 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:45.774 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:45.774 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:45.774 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:26:45.774 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:45.775 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:45.775 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:45.775 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:45.775 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:45.775 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:45.775 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:45.775 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:45.775 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:46.032 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:46.032 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:46.032 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:46.032 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:46.032 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:46.032 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:46.032 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:46.032 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
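From this point on, the @14-@18 lines are the eight backgrounded add_remove workers running concurrently, which is why add and remove calls for different namespace IDs interleave out of order. Each worker repeatedly adds its null bdev to nqn.2016-06.io.spdk:cnode1 under a fixed namespace ID and removes it again, ten times per the "(( i < 10 ))" checks at @16. A minimal sketch of one worker, reconstructed from the trace markers (the function name comes from the @63 launch lines; the loop form and the rpc.py shorthand are assumptions, not the script's verbatim text):

    # Reconstructed from the set -x markers @14-@18 above; not the script's verbatim text.
    add_remove() {
        local nsid=$1 bdev=$2                                                          # @14
        for ((i = 0; i < 10; i++)); do                                                 # @16
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev" # @17: attach bdev as namespace $nsid
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"         # @18: detach it again
        done
    }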
00:26:46.291 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:46.291 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:46.291 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:46.291 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:46.291 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:46.291 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:46.291 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:46.291 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:46.291 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:46.291 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:46.291 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:46.291 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:46.291 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:46.291 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:46.291 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:46.291 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:46.291 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:46.291 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:46.291 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:46.291 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:46.291 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:46.291 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:46.291 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:46.291 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:46.549 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:46.549 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:46.549 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:46.549 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:46.549 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:46.549 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:46.549 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:46.549 07:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:47.115 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:47.115 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:47.115 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:47.115 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:47.115 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:47.115 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:47.115 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:47.115 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:47.115 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:47.115 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:47.115 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:47.115 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:47.115 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:47.115 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:47.115 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:47.115 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:47.115 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:47.115 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:47.115 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:47.116 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:47.116 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:47.116 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:47.116 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:47.116 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:47.116 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:47.116 07:28:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:47.374 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:47.374 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:47.374 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:47.374 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:47.374 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:47.374 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:47.632 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:47.632 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:47.632 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:47.632 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:47.632 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:47.632 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:47.632 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:47.632 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:47.632 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:47.632 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:47.632 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:47.632 07:28:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:47.632 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:47.632 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:47.632 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:47.632 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:47.632 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:47.632 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:47.632 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:47.632 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:47.632 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:47.632 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:47.632 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:47.632 07:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:47.890 07:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:47.890 07:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:47.890 07:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:47.890 07:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:47.890 07:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:47.890 
07:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:47.890 07:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:47.890 07:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:48.149 07:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.149 07:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:48.149 07:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:48.149 07:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.149 07:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:48.149 07:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:48.149 07:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.149 07:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:48.149 07:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.149 07:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:48.149 07:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:48.149 07:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:48.149 07:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.149 07:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:48.149 07:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:48.149 07:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.149 07:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:48.149 07:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:48.149 07:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.149 07:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:48.149 07:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:48.149 07:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.149 07:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:48.149 07:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:48.407 07:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:48.407 07:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:48.407 07:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:48.407 07:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:48.407 07:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:48.407 07:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:48.407 07:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:48.407 07:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:48.666 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.666 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:26:48.666 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:48.666 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.666 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:48.666 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:48.666 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.666 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:48.666 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:48.666 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.666 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:48.666 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:48.666 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.666 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:48.666 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:48.666 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.666 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:48.666 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:48.666 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.666 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:48.666 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:48.666 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.666 
07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:48.666 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:48.925 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:48.925 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:48.925 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:48.925 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:48.925 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:49.183 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:49.183 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:49.183 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:49.441 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.441 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.441 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:49.441 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.441 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.441 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:49.441 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.441 07:28:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.441 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:49.441 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.441 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.441 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:49.441 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.441 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.441 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:49.441 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.441 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.441 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:49.441 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.441 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.441 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:49.441 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.441 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.441 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:49.699 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:49.699 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:49.699 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:49.699 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:49.699 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:49.699 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:49.699 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:49.699 07:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:49.957 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.957 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.957 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:49.957 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.957 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.957 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:49.957 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.957 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.957 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:49.957 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.957 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.957 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:49.957 07:28:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.957 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.957 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:49.957 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.957 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.957 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:49.957 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.957 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.957 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:49.957 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.957 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.957 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:50.215 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:50.215 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:50.215 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:50.215 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:50.215 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:50.215 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:50.215 
07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:50.215 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:50.472 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:50.472 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:50.472 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:50.472 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:50.473 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:50.473 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:50.473 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:50.473 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:50.473 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:50.473 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:50.473 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:50.473 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:50.473 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:50.473 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:50.473 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:50.473 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:50.473 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:50.473 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:50.473 07:28:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:50.473 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:50.473 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:50.473 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:50.473 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:50.473 07:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:50.730 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:50.730 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:50.730 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:50.730 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:50.730 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:50.730 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:50.730 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:50.730 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:50.988 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:50.988 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:50.988 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:50.988 07:28:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:50.988 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:50.988 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:51.245 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:51.245 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:51.245 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:51.245 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:51.245 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:51.245 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:51.245 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:51.245 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:51.245 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:51.245 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:51.245 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:26:51.245 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:26:51.245 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:51.245 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:26:51.245 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:51.245 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:26:51.245 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:51.245 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:51.245 rmmod nvme_tcp 00:26:51.245 rmmod nvme_fabrics 00:26:51.245 rmmod nvme_keyring 00:26:51.245 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:51.245 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:26:51.245 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:26:51.245 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2620067 ']' 00:26:51.245 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2620067 
00:26:51.245 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 2620067 ']' 00:26:51.245 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 2620067 00:26:51.245 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:26:51.245 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:51.245 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2620067 00:26:51.245 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:51.245 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:51.245 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2620067' 00:26:51.245 killing process with pid 2620067 00:26:51.245 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 2620067 00:26:51.245 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 2620067 00:26:51.503 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:51.503 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:51.503 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:51.503 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:26:51.503 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:26:51.503 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:51.503 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:26:51.503 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:51.503 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:51.504 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.504 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:51.504 07:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:53.406 07:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:53.406 00:26:53.406 real 0m47.407s 00:26:53.406 user 3m18.968s 00:26:53.406 sys 0m21.308s 00:26:53.406 07:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:53.406 07:28:56 
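The xtrace entries above all come from the short hot loop in ns_hotplug_stress.sh (the @16-@18 markers are its script lines). A minimal sketch of that cycle, reconstructed from the trace alone; backgrounding the RPC calls and the wait between the add and remove passes are assumptions inferred from the interleaved completion order, and rpc_py/subsys are illustrative local names, not taken from the script:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subsys=nqn.2016-06.io.spdk:cnode1

    for ((i = 0; i < 10; ++i)); do            # sh@16: (( ++i )) / (( i < 10 ))
        for n in {1..8}; do
            # sh@17: expose bdev null(N-1) as namespace N of cnode1
            "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$subsys" "null$((n - 1))" &
        done
        wait
        for n in {1..8}; do
            # sh@18: hot-remove the same namespaces again
            "$rpc_py" nvmf_subsystem_remove_ns "$subsys" "$n" &
        done
        wait
    done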
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:53.406 ************************************ 00:26:53.406 END TEST nvmf_ns_hotplug_stress 00:26:53.406 ************************************ 00:26:53.665 07:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:26:53.665 07:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:26:53.665 07:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:53.665 07:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:53.665 ************************************ 00:26:53.665 START TEST nvmf_delete_subsystem 00:26:53.665 ************************************ 00:26:53.665 07:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:26:53.665 * Looking for test storage... 00:26:53.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:53.665 07:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:53.665 07:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:26:53.666 07:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:26:53.666 07:28:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:53.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:53.666 --rc genhtml_branch_coverage=1 00:26:53.666 --rc genhtml_function_coverage=1 00:26:53.666 --rc genhtml_legend=1 00:26:53.666 --rc geninfo_all_blocks=1 00:26:53.666 --rc geninfo_unexecuted_blocks=1 00:26:53.666 00:26:53.666 ' 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:53.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:53.666 --rc genhtml_branch_coverage=1 00:26:53.666 --rc genhtml_function_coverage=1 00:26:53.666 --rc genhtml_legend=1 00:26:53.666 --rc geninfo_all_blocks=1 00:26:53.666 --rc geninfo_unexecuted_blocks=1 00:26:53.666 00:26:53.666 ' 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:53.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:53.666 --rc genhtml_branch_coverage=1 00:26:53.666 --rc genhtml_function_coverage=1 00:26:53.666 --rc genhtml_legend=1 00:26:53.666 --rc geninfo_all_blocks=1 00:26:53.666 --rc 
geninfo_unexecuted_blocks=1 00:26:53.666 00:26:53.666 ' 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:53.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:53.666 --rc genhtml_branch_coverage=1 00:26:53.666 --rc genhtml_function_coverage=1 00:26:53.666 --rc genhtml_legend=1 00:26:53.666 --rc geninfo_all_blocks=1 00:26:53.666 --rc geninfo_unexecuted_blocks=1 00:26:53.666 00:26:53.666 ' 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:53.666 07:28:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:53.666 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:53.667 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:53.667 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:53.667 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:53.667 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:53.667 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:53.667 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:53.667 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:53.667 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:26:53.667 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:53.667 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:53.667 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:53.667 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:53.667 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:53.667 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:53.667 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:53.667 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:53.667 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:53.667 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:53.667 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:26:53.667 07:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:56.199 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:56.199 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:26:56.199 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:56.199 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:56.199 07:28:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:56.199 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:56.199 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:56.199 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:26:56.199 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:56.200 07:28:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:56.200 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:56.200 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:56.200 07:28:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:56.200 Found net devices under 0000:09:00.0: cvl_0_0 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:56.200 Found net devices under 0000:09:00.1: cvl_0_1 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:56.200 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:56.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:56.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.338 ms 00:26:56.201 00:26:56.201 --- 10.0.0.2 ping statistics --- 00:26:56.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.201 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:56.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:56.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:26:56.201 00:26:56.201 --- 10.0.0.1 ping statistics --- 00:26:56.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.201 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2627265 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2627265 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 2627265 ']' 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:56.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
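[Annotation] The trace above (nvmf_tcp_init in nvmf/common.sh) pairs the two E810 ports back-to-back through a network namespace before the target is launched inside that namespace. A condensed sketch of the topology it builds, using the interface names and addresses from this run (this is a paraphrase for readability, not the literal script text):

    # cvl_0_0 becomes the target-side port inside the namespace;
    # cvl_0_1 stays in the root namespace as the initiator side.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # allow NVMe/TCP port
    ping -c 1 10.0.0.2                                                 # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> initiator

Because nvmf_tgt is started via "ip netns exec cvl_0_0_ns_spdk" (NVMF_TARGET_NS_CMD), the initiator-side tools in the root namespace reach it over the physical 0000:09:00.0/0000:09:00.1 link rather than loopback.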
00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:56.201 [2024-11-20 07:28:59.268470] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:56.201 [2024-11-20 07:28:59.269553] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:26:56.201 [2024-11-20 07:28:59.269620] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:56.201 [2024-11-20 07:28:59.361056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:56.201 [2024-11-20 07:28:59.434640] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:56.201 [2024-11-20 07:28:59.434704] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:56.201 [2024-11-20 07:28:59.434742] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:56.201 [2024-11-20 07:28:59.434765] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:56.201 [2024-11-20 07:28:59.434785] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:56.201 [2024-11-20 07:28:59.436514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:56.201 [2024-11-20 07:28:59.436523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.201 [2024-11-20 07:28:59.547979] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:56.201 [2024-11-20 07:28:59.547992] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:56.201 [2024-11-20 07:28:59.548346] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:56.201 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:56.460 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:56.460 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:56.460 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.460 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:56.460 [2024-11-20 07:28:59.641355] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:56.460 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.460 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:26:56.460 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.460 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:56.460 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.460 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:56.460 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.460 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:56.461 [2024-11-20 07:28:59.661556] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:56.461 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.461 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:26:56.461 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.461 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:56.461 NULL1 00:26:56.461 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.461 07:28:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:56.461 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.461 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:56.461 Delay0 00:26:56.461 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.461 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:56.461 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.461 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:56.461 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.461 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2627370 00:26:56.461 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:26:56.461 07:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:26:56.461 [2024-11-20 07:28:59.746548] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
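[Annotation] The rpc_cmd calls traced above assemble the target configuration for this test. Shown below as equivalent direct rpc.py invocations with the same arguments as in the trace; this assumes rpc_cmd is the autotest wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock socket:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512          # 1000 MiB null bdev, 512 B blocks
    ./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000           # all delay latencies ~1,000,000 us
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The Delay0 bdev stacked on NULL1 keeps each I/O outstanding for roughly a second, so the nvmf_delete_subsystem issued while spdk_nvme_perf is running races with in-flight commands; the aborted completions in the trace that follows are the expected result of that race.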
00:26:58.441 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:58.441 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.441 07:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 starting I/O failed: -6 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 starting I/O failed: -6 00:26:58.700 Write completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Write completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 starting I/O failed: -6 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Write completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Write completed with error (sct=0, sc=8) 00:26:58.700 starting I/O failed: -6 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Write completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 starting I/O failed: -6 00:26:58.700 Write completed with error (sct=0, sc=8) 00:26:58.700 Write completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Write completed with error (sct=0, sc=8) 00:26:58.700 starting I/O failed: -6 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Write completed with error (sct=0, sc=8) 00:26:58.700 Write completed with error (sct=0, sc=8) 00:26:58.700 starting I/O failed: -6 00:26:58.700 Write completed with error (sct=0, sc=8) 00:26:58.700 Write completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 starting I/O failed: -6 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Write completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 starting I/O failed: -6 00:26:58.700 Write completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Write completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 starting I/O failed: -6 00:26:58.700 Write completed with error (sct=0, sc=8) 00:26:58.700 [2024-11-20 07:29:01.920016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f514a0 is same with the state(6) to be set 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 starting I/O failed: -6 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 starting I/O failed: -6 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error 
(sct=0, sc=8) 00:26:58.700 Write completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 starting I/O failed: -6 00:26:58.700 Write completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 starting I/O failed: -6 00:26:58.700 Write completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Write completed with error (sct=0, sc=8) 00:26:58.700 starting I/O failed: -6 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Write completed with error (sct=0, sc=8) 00:26:58.700 starting I/O failed: -6 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 starting I/O failed: -6 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Write completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 starting I/O failed: -6 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Write completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Write completed with error (sct=0, sc=8) 00:26:58.700 starting I/O failed: -6 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 starting I/O failed: -6 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 [2024-11-20 07:29:01.922676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f880c00d4d0 is same with the state(6) to be set 00:26:58.700 Write completed with error (sct=0, sc=8) 00:26:58.700 Write completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Write completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Write completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Write completed with error (sct=0, sc=8) 00:26:58.700 Write completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Write completed with error (sct=0, sc=8) 00:26:58.700 Write completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with 
error (sct=0, sc=8) 00:26:58.700 Write completed with error (sct=0, sc=8) 00:26:58.700 Write completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Write completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Write completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.700 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Write completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Write completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Write completed with error (sct=0, sc=8) 00:26:58.701 Write completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Write completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Write completed with error (sct=0, sc=8) 00:26:58.701 Write completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Write completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Write 
completed with error (sct=0, sc=8) 00:26:58.701 Write completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 [2024-11-20 07:29:01.923158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f51860 is same with the state(6) to be set 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Read completed with error (sct=0, sc=8) 00:26:58.701 Write completed with error (sct=0, sc=8) 00:26:58.701 Write completed with error (sct=0, sc=8) 00:26:58.701 Write completed with error (sct=0, sc=8) 00:26:59.635 [2024-11-20 07:29:02.883949] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f529a0 is same with the state(6) to be set 00:26:59.635 Write completed with error (sct=0, sc=8) 00:26:59.635 Read completed with error (sct=0, sc=8) 00:26:59.635 Read completed with error (sct=0, sc=8) 00:26:59.635 Write completed with error (sct=0, sc=8) 00:26:59.635 Read completed with error (sct=0, sc=8) 00:26:59.635 Write completed with error (sct=0, sc=8) 00:26:59.635 Read completed with error (sct=0, sc=8) 00:26:59.635 Read completed with error (sct=0, sc=8) 00:26:59.635 Write completed with error (sct=0, sc=8) 00:26:59.635 Read completed with error (sct=0, sc=8) 00:26:59.635 Read completed with error (sct=0, sc=8) 00:26:59.635 Write completed with error (sct=0, sc=8) 00:26:59.635 Write completed with error (sct=0, sc=8) 00:26:59.635 Read completed with error (sct=0, sc=8) 00:26:59.635 Read completed with error (sct=0, sc=8) 00:26:59.635 Read completed with error (sct=0, sc=8) 00:26:59.635 Read completed with error (sct=0, sc=8) 00:26:59.635 Read completed with error (sct=0, sc=8) 00:26:59.635 Write completed with error (sct=0, sc=8) 00:26:59.635 Write completed with error (sct=0, sc=8) 00:26:59.635 Read completed with error (sct=0, sc=8) 00:26:59.635 [2024-11-20 07:29:02.921550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f880c00d020 is same with the state(6) to be set 00:26:59.635 Read completed with error (sct=0, sc=8) 00:26:59.635 Write completed with error (sct=0, sc=8) 00:26:59.635 Read completed with error (sct=0, sc=8) 00:26:59.636 Write completed with error (sct=0, sc=8) 00:26:59.636 Write completed with error (sct=0, sc=8) 00:26:59.636 Read completed with error (sct=0, sc=8) 00:26:59.636 Write completed with error (sct=0, sc=8) 00:26:59.636 Read completed with error (sct=0, sc=8) 00:26:59.636 Write completed with error (sct=0, sc=8) 00:26:59.636 Read completed with error (sct=0, sc=8) 00:26:59.636 Write completed with error (sct=0, sc=8) 00:26:59.636 Write completed with error (sct=0, sc=8) 00:26:59.636 Read completed with error (sct=0, sc=8) 00:26:59.636 Write completed with error (sct=0, sc=8) 00:26:59.636 Read completed with error (sct=0, sc=8) 00:26:59.636 Read completed with error (sct=0, sc=8) 00:26:59.636 [2024-11-20 07:29:02.924647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f512c0 is same with the state(6) to be set 00:26:59.636 Read completed with error (sct=0, sc=8) 00:26:59.636 Read completed with error (sct=0, sc=8) 00:26:59.636 Write completed with error (sct=0, sc=8) 00:26:59.636 Read completed with error (sct=0, sc=8) 00:26:59.636 Read completed with error (sct=0, sc=8) 00:26:59.636 Read completed with error (sct=0, sc=8) 00:26:59.636 Read completed with error (sct=0, sc=8) 00:26:59.636 Read completed with error (sct=0, sc=8) 
00:26:59.636 Read completed with error (sct=0, sc=8) 00:26:59.636 Write completed with error (sct=0, sc=8) 00:26:59.636 Write completed with error (sct=0, sc=8) 00:26:59.636 Read completed with error (sct=0, sc=8) 00:26:59.636 Read completed with error (sct=0, sc=8) 00:26:59.636 Read completed with error (sct=0, sc=8) 00:26:59.636 Read completed with error (sct=0, sc=8) 00:26:59.636 Read completed with error (sct=0, sc=8) 00:26:59.636 [2024-11-20 07:29:02.924793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f51680 is same with the state(6) to be set 00:26:59.636 Read completed with error (sct=0, sc=8) 00:26:59.636 Write completed with error (sct=0, sc=8) 00:26:59.636 Read completed with error (sct=0, sc=8) 00:26:59.636 Write completed with error (sct=0, sc=8) 00:26:59.636 Read completed with error (sct=0, sc=8) 00:26:59.636 Read completed with error (sct=0, sc=8) 00:26:59.636 Write completed with error (sct=0, sc=8) 00:26:59.636 Write completed with error (sct=0, sc=8) 00:26:59.636 Read completed with error (sct=0, sc=8) 00:26:59.636 Write completed with error (sct=0, sc=8) 00:26:59.636 Read completed with error (sct=0, sc=8) 00:26:59.636 Write completed with error (sct=0, sc=8) 00:26:59.636 Write completed with error (sct=0, sc=8) 00:26:59.636 Read completed with error (sct=0, sc=8) 00:26:59.636 Write completed with error (sct=0, sc=8) 00:26:59.636 Read completed with error (sct=0, sc=8) 00:26:59.636 Write completed with error (sct=0, sc=8) 00:26:59.636 Read completed with error (sct=0, sc=8) 00:26:59.636 Write completed with error (sct=0, sc=8) 00:26:59.636 Write completed with error (sct=0, sc=8) 00:26:59.636 [2024-11-20 07:29:02.925165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f880c00d800 is same with the state(6) to be set 00:26:59.636 Initializing NVMe Controllers 00:26:59.636 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:59.636 Controller IO queue size 128, less than required. 00:26:59.636 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:59.636 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:26:59.636 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:26:59.636 Initialization complete. Launching workers. 
00:26:59.636 ======================================================== 00:26:59.636 Latency(us) 00:26:59.636 Device Information : IOPS MiB/s Average min max 00:26:59.636 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 153.48 0.07 940254.44 2479.22 1045063.94 00:26:59.636 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.92 0.08 911478.88 490.41 1013410.10 00:26:59.636 ======================================================== 00:26:59.636 Total : 316.40 0.15 925437.51 490.41 1045063.94 00:26:59.636 00:26:59.636 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.636 [2024-11-20 07:29:02.925976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f529a0 (9): Bad file descriptor 00:26:59.636 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:26:59.636 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:26:59.636 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2627370 00:26:59.636 07:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:27:00.203 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:27:00.203 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2627370 00:27:00.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2627370) - No such process 00:27:00.203 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2627370 00:27:00.203 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:27:00.203 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2627370 00:27:00.203 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:27:00.203 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:00.203 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:27:00.203 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:00.203 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2627370 00:27:00.203 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:27:00.203 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:00.203 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:00.203 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:00.203 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:00.203 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.203 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:00.203 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.203 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:00.203 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.203 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:00.203 [2024-11-20 07:29:03.445530] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:00.203 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.203 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:00.203 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.203 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:00.203 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.203 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2627813 00:27:00.203 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:27:00.203 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2627813 00:27:00.203 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:00.203 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:27:00.204 [2024-11-20 07:29:03.508978] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
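[Annotation] After re-creating the subsystem, the script launches a second 3-second perf run and then polls it with kill -0 every 0.5 s (delete_subsystem.sh lines ~56-60 in the trace below). An approximate paraphrase of that wait loop, not the literal script; perf_pid is hypothetical shorthand for the spdk_nvme_perf PID (2627813 in this run):

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do   # loop while perf is still running
        sleep 0.5
        (( delay++ > 20 )) && exit 1            # give up after roughly 10 s
    done

With Delay0 adding ~1,000,000 us to every I/O at queue depth 128 per worker, perf should sustain roughly 128 / 1.0 s ≈ 128 IOPS per core, which matches the 128.00 IOPS per core in the summary further down; the loop ends when perf exits on its own, producing the "No such process" message from kill.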
00:27:00.769 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:00.769 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2627813 00:27:00.769 07:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:01.335 07:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:01.335 07:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2627813 00:27:01.335 07:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:01.592 07:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:01.592 07:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2627813 00:27:01.592 07:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:02.157 07:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:02.157 07:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2627813 00:27:02.157 07:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:02.721 07:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:02.721 07:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2627813 00:27:02.721 07:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:03.287 07:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:03.287 07:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2627813 00:27:03.287 07:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:03.287 Initializing NVMe Controllers 00:27:03.287 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:03.287 Controller IO queue size 128, less than required. 00:27:03.287 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:03.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:27:03.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:27:03.287 Initialization complete. Launching workers. 
00:27:03.287 ======================================================== 00:27:03.287 Latency(us) 00:27:03.287 Device Information : IOPS MiB/s Average min max 00:27:03.287 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005435.61 1000213.02 1041913.33 00:27:03.287 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004737.27 1000191.10 1041123.82 00:27:03.287 ======================================================== 00:27:03.287 Total : 256.00 0.12 1005086.44 1000191.10 1041913.33 00:27:03.287 00:27:03.545 07:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:03.545 07:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2627813 00:27:03.545 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2627813) - No such process 00:27:03.545 07:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2627813 00:27:03.545 07:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:27:03.545 07:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:27:03.545 07:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:03.545 07:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:27:03.804 07:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:03.804 07:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:27:03.804 07:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:03.804 07:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:03.804 rmmod nvme_tcp 00:27:03.804 rmmod nvme_fabrics 00:27:03.804 rmmod nvme_keyring 00:27:03.804 07:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:03.804 07:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:27:03.804 07:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:27:03.804 07:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2627265 ']' 00:27:03.804 07:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2627265 00:27:03.804 07:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 2627265 ']' 00:27:03.804 07:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 2627265 00:27:03.804 07:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:27:03.804 07:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:03.804 07:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2627265 00:27:03.804 07:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:03.804 07:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:03.804 07:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2627265' 00:27:03.804 killing process with pid 2627265 00:27:03.804 07:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 2627265 00:27:03.804 07:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 2627265 00:27:04.064 07:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:04.064 07:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:04.064 07:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:04.064 07:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:27:04.064 07:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:27:04.064 07:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:04.064 07:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:27:04.064 07:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:04.064 07:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:04.064 07:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:04.064 07:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:04.064 07:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:05.975 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:05.975 00:27:05.975 real 0m12.467s 00:27:05.975 user 0m24.831s 00:27:05.975 sys 0m3.818s 00:27:05.975 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:05.975 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:05.975 ************************************ 00:27:05.975 END TEST nvmf_delete_subsystem 00:27:05.976 ************************************ 00:27:05.976 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:27:05.976 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:27:05.976 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:27:05.976 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:05.976 ************************************ 00:27:05.976 START TEST nvmf_host_management 00:27:05.976 ************************************ 00:27:05.976 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:27:06.236 * Looking for test storage... 00:27:06.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:06.236 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:06.236 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:27:06.236 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:06.236 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:06.236 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:06.236 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:06.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.237 --rc genhtml_branch_coverage=1 00:27:06.237 --rc genhtml_function_coverage=1 00:27:06.237 --rc genhtml_legend=1 00:27:06.237 --rc geninfo_all_blocks=1 00:27:06.237 --rc geninfo_unexecuted_blocks=1 00:27:06.237 00:27:06.237 ' 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:06.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.237 --rc genhtml_branch_coverage=1 00:27:06.237 --rc genhtml_function_coverage=1 00:27:06.237 --rc genhtml_legend=1 00:27:06.237 --rc geninfo_all_blocks=1 00:27:06.237 --rc geninfo_unexecuted_blocks=1 00:27:06.237 00:27:06.237 ' 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:06.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.237 --rc genhtml_branch_coverage=1 00:27:06.237 --rc genhtml_function_coverage=1 00:27:06.237 --rc genhtml_legend=1 00:27:06.237 --rc geninfo_all_blocks=1 00:27:06.237 --rc geninfo_unexecuted_blocks=1 00:27:06.237 00:27:06.237 ' 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:06.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.237 --rc genhtml_branch_coverage=1 00:27:06.237 --rc genhtml_function_coverage=1 00:27:06.237 --rc genhtml_legend=1 
00:27:06.237 --rc geninfo_all_blocks=1 00:27:06.237 --rc geninfo_unexecuted_blocks=1 00:27:06.237 00:27:06.237 ' 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:06.237 07:29:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:06.237 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:06.238 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:06.238 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:06.238 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:06.238 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:06.238 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:27:06.238 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:06.238 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:06.238 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:06.238 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:06.238 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:06.238 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:06.238 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:06.238 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:06.238 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:06.238 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:06.238 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:27:06.238 07:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:08.142 07:29:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:08.142 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:08.142 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:08.142 Found net devices under 0000:09:00.0: cvl_0_0 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:08.142 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:08.143 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:08.143 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:08.143 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:08.143 Found net devices under 0000:09:00.1: cvl_0_1 00:27:08.143 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:08.143 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:08.143 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:27:08.143 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:08.143 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:08.143 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:08.143 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:08.143 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:08.143 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:08.143 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:08.143 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:08.143 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:08.143 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:08.143 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:08.143 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:08.143 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:08.143 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:08.143 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:08.143 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:08.143 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:08.143 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:08.401 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:08.401 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:08.401 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:08.401 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:08.401 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:08.401 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:08.401 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:08.401 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:08.401 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:08.401 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:27:08.401 00:27:08.401 --- 10.0.0.2 ping statistics --- 00:27:08.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:08.401 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:27:08.401 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:08.401 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:08.401 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:27:08.401 00:27:08.401 --- 10.0.0.1 ping statistics --- 00:27:08.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:08.401 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:27:08.401 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:08.401 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:27:08.401 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:08.401 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:08.401 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:08.401 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:08.401 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:08.401 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:08.401 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:08.401 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:27:08.401 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:27:08.401 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:27:08.401 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:08.401 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:08.401 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:08.401 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2630155 00:27:08.401 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:27:08.401 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2630155 00:27:08.401 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 2630155 ']' 00:27:08.401 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:08.401 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:08.401 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:08.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:08.401 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:08.401 07:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:08.401 [2024-11-20 07:29:11.758001] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:08.401 [2024-11-20 07:29:11.759127] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:27:08.401 [2024-11-20 07:29:11.759195] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:08.660 [2024-11-20 07:29:11.832749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:08.660 [2024-11-20 07:29:11.891383] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:08.660 [2024-11-20 07:29:11.891432] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:08.660 [2024-11-20 07:29:11.891460] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:08.660 [2024-11-20 07:29:11.891472] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:08.660 [2024-11-20 07:29:11.891482] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:08.660 [2024-11-20 07:29:11.893155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:08.660 [2024-11-20 07:29:11.893218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:08.660 [2024-11-20 07:29:11.893296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:08.660 [2024-11-20 07:29:11.893296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:08.660 [2024-11-20 07:29:11.981029] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:08.660 [2024-11-20 07:29:11.981263] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:08.660 [2024-11-20 07:29:11.981563] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:08.660 [2024-11-20 07:29:11.982124] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:08.660 [2024-11-20 07:29:11.982396] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
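The nvmftestinit phase traced above splits the two e810 ports between the host and a dedicated network namespace, so the target and the initiator can exchange real NVMe/TCP traffic on a single machine. A minimal sketch of the equivalent manual setup, assuming the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing visible in this log (the SPDK helper functions wrap these same commands; the shortened iptables comment is illustrative):

# move the target-side port into its own namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# initiator address on the host, target address inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP listener port; the comment lets teardown find the rule again
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
# verify both directions before launching nvmf_tgt under 'ip netns exec cvl_0_0_ns_spdk'
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1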
00:27:08.660 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:08.660 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:27:08.660 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:08.660 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:08.660 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:08.660 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:08.660 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:08.660 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.660 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:08.660 [2024-11-20 07:29:12.033993] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:08.660 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.660 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:27:08.660 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:08.660 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:08.660 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:08.660 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:27:08.660 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:27:08.660 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.660 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:08.660 Malloc0 00:27:08.919 [2024-11-20 07:29:12.098148] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:08.919 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.919 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:27:08.919 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:08.919 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:08.919 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2630202 00:27:08.919 07:29:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2630202 /var/tmp/bdevperf.sock 00:27:08.919 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 2630202 ']' 00:27:08.919 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:08.919 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:08.919 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:27:08.919 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:08.919 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:27:08.919 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:08.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:08.919 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:08.919 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:27:08.919 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:08.919 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:08.919 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:08.919 { 00:27:08.919 "params": { 00:27:08.919 "name": "Nvme$subsystem", 00:27:08.919 "trtype": "$TEST_TRANSPORT", 00:27:08.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.919 "adrfam": "ipv4", 00:27:08.919 "trsvcid": "$NVMF_PORT", 00:27:08.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.919 "hdgst": ${hdgst:-false}, 00:27:08.919 "ddgst": ${ddgst:-false} 00:27:08.919 }, 00:27:08.919 "method": "bdev_nvme_attach_controller" 00:27:08.919 } 00:27:08.919 EOF 00:27:08.919 )") 00:27:08.919 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:27:08.919 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
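The bdevperf command captured above takes its attach configuration as JSON on /dev/fd/63, i.e. via bash process substitution rather than a file on disk: gen_nvmf_target_json fills the heredoc shown here, and the jq/printf steps splice it into a bdev-subsystem config, which is what gets printed next in the log. A hedged, simplified sketch of a standalone equivalent (the file name is illustrative, and the wrapper SPDK actually generates may carry extra bdev options; addresses and NQNs follow the values visible in this log):

cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# same workload parameters as the traced run: queue depth 64, 64 KiB I/O, verify workload for 10 seconds
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme.json -q 64 -o 65536 -w verify -t 10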
00:27:08.919 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:27:08.919 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:08.919 "params": { 00:27:08.919 "name": "Nvme0", 00:27:08.919 "trtype": "tcp", 00:27:08.919 "traddr": "10.0.0.2", 00:27:08.919 "adrfam": "ipv4", 00:27:08.919 "trsvcid": "4420", 00:27:08.919 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:08.919 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:08.919 "hdgst": false, 00:27:08.919 "ddgst": false 00:27:08.919 }, 00:27:08.919 "method": "bdev_nvme_attach_controller" 00:27:08.919 }' 00:27:08.919 [2024-11-20 07:29:12.174923] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:27:08.919 [2024-11-20 07:29:12.175008] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2630202 ] 00:27:08.919 [2024-11-20 07:29:12.248045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.919 [2024-11-20 07:29:12.309317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.484 Running I/O for 10 seconds... 00:27:09.485 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:09.485 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:27:09.485 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:09.485 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.485 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:09.485 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.485 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:09.485 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:27:09.485 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:09.485 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:27:09.485 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:27:09.485 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:27:09.485 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:27:09.485 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:27:09.485 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:27:09.485 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:27:09.485 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.485 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:09.485 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.485 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:27:09.485 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:27:09.485 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:27:09.743 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:27:09.743 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:27:09.743 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:27:09.743 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:27:09.743 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.743 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:09.743 07:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.743 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:27:09.743 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:27:09.743 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:27:09.743 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:27:09.743 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:27:09.743 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:27:09.743 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.743 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:09.743 [2024-11-20 07:29:13.009791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:09.743 [2024-11-20 07:29:13.009853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.743 [2024-11-20 07:29:13.009871] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:09.743 [2024-11-20 07:29:13.009885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.743 [2024-11-20 07:29:13.009911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:09.743 [2024-11-20 07:29:13.009926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.743 [2024-11-20 07:29:13.009940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:09.743 [2024-11-20 07:29:13.009953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.743 [2024-11-20 07:29:13.009966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2318a40 is same with the state(6) to be set 00:27:09.743 [2024-11-20 07:29:13.010354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.743 [2024-11-20 07:29:13.010378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.743 [2024-11-20 07:29:13.010402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.743 [2024-11-20 07:29:13.010417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.743 [2024-11-20 07:29:13.010433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.743 [2024-11-20 07:29:13.010447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.743 [2024-11-20 07:29:13.010462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.743 [2024-11-20 07:29:13.010476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.743 [2024-11-20 07:29:13.010491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.743 [2024-11-20 07:29:13.010504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.743 [2024-11-20 07:29:13.010519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.743 [2024-11-20 07:29:13.010533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.743 [2024-11-20 07:29:13.010548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.743 [2024-11-20 07:29:13.010562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:09.743 [2024-11-20 07:29:13.010577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.743 [2024-11-20 07:29:13.010591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:09.743 [... the same WRITE notice / ABORTED - SQ DELETION (00/08) completion pair repeats for cid 8 through cid 63, lba 82944 through lba 89984 in 128-block steps, timestamps 07:29:13.010606 through 07:29:13.012251 ...]
07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:27:09.744 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:09.744 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:27:09.745 [2024-11-20 07:29:13.013467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:27:09.745 task offset: 81920 on job bdev=Nvme0n1 fails
00:27:09.745
00:27:09.745                                         Latency(us)
00:27:09.745 [2024-11-20T06:29:13.178Z] Device Information : runtime(s)    IOPS   MiB/s  Fail/s   TO/s   Average      min      max
00:27:09.745 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:09.745 Job: Nvme0n1 ended in about 0.40 seconds with error
00:27:09.745 Verification LBA range: start 0x0 length 0x400
00:27:09.745     Nvme0n1 : 0.40 1594.90 99.68 159.49 0.00 35430.31 2924.85 34564.17
00:27:09.745 [2024-11-20T06:29:13.178Z] ===================================================================================================================
00:27:09.745 [2024-11-20T06:29:13.178Z] Total : 1594.90 99.68 159.49 0.00 35430.31 2924.85 34564.17
00:27:09.745 [2024-11-20 07:29:13.015363] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:27:09.745 [2024-11-20 07:29:13.015392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2318a40 (9): Bad file descriptor
00:27:09.745 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:09.745 07:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:27:09.745 [2024-11-20 07:29:13.066049] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
00:27:10.676 07:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2630202
00:27:10.676 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2630202) - No such process
00:27:10.676 07:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true
00:27:10.676 07:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:27:10.676 07:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:27:10.676 07:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:27:10.676 07:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:27:10.676 07:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:27:10.676 07:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:27:10.676 07:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:27:10.676 {
00:27:10.676 "params": {
00:27:10.676 "name": "Nvme$subsystem",
00:27:10.676 "trtype": "$TEST_TRANSPORT",
00:27:10.676 "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:10.676 "adrfam": "ipv4",
00:27:10.676 "trsvcid": "$NVMF_PORT",
00:27:10.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:10.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:10.676 "hdgst": ${hdgst:-false},
00:27:10.676 "ddgst": ${ddgst:-false}
00:27:10.676 },
00:27:10.676 "method": "bdev_nvme_attach_controller"
00:27:10.676 }
00:27:10.676 EOF
00:27:10.676 )")
00:27:10.676 07:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:27:10.676 07:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:27:10.676 07:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:27:10.676 07:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:27:10.676 "params": {
00:27:10.676 "name": "Nvme0",
00:27:10.676 "trtype": "tcp",
00:27:10.676 "traddr": "10.0.0.2",
00:27:10.676 "adrfam": "ipv4",
00:27:10.676 "trsvcid": "4420",
00:27:10.676 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:27:10.676 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:27:10.676 "hdgst": false,
00:27:10.676 "ddgst": false
00:27:10.676 },
00:27:10.676 "method": "bdev_nvme_attach_controller"
00:27:10.676 }'
[2024-11-20 07:29:14.068684] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization...
[2024-11-20 07:29:14.068777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2630474 ]
00:27:10.933 [2024-11-20 07:29:14.140055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:10.933 [2024-11-20 07:29:14.199497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:27:11.191 Running I/O for 1 seconds...
00:27:12.122 1664.00 IOPS, 104.00 MiB/s
00:27:12.122
00:27:12.122                                         Latency(us)
00:27:12.122 [2024-11-20T06:29:15.555Z] Device Information : runtime(s)    IOPS   MiB/s  Fail/s   TO/s   Average      min      max
00:27:12.122 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:12.122 Verification LBA range: start 0x0 length 0x400
00:27:12.122     Nvme0n1 : 1.02 1698.74 106.17 0.00 0.00 37058.81 4538.97 33399.09
00:27:12.122 [2024-11-20T06:29:15.555Z] ===================================================================================================================
00:27:12.122 [2024-11-20T06:29:15.555Z] Total : 1698.74 106.17 0.00 0.00 37058.81 4538.97 33399.09
00:27:12.380 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:27:12.380 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:27:12.380 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:27:12.380 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:27:12.380 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:27:12.380 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:12.380 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:27:12.380 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:12.380 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:27:12.380 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:12.380 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management --
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:12.380 rmmod nvme_tcp 00:27:12.380 rmmod nvme_fabrics 00:27:12.380 rmmod nvme_keyring 00:27:12.380 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:12.380 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:27:12.380 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:27:12.380 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2630155 ']' 00:27:12.380 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2630155 00:27:12.380 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 2630155 ']' 00:27:12.380 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 2630155 00:27:12.380 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:27:12.381 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:12.381 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2630155 00:27:12.381 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:12.381 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:12.381 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2630155' 00:27:12.381 killing process with pid 2630155 00:27:12.381 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 2630155 00:27:12.381 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 2630155 00:27:12.640 [2024-11-20 07:29:15.947577] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:27:12.640 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:12.640 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:12.640 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:12.640 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:27:12.640 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:27:12.640 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:12.640 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:27:12.640 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:12.640 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:27:12.640 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:12.640 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:12.640 07:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:15.175 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:15.175 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:15.175 00:27:15.175 real 0m8.625s 00:27:15.175 user 0m17.326s 00:27:15.175 sys 0m3.742s 00:27:15.175 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:15.175 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:15.175 ************************************ 00:27:15.175 END TEST nvmf_host_management 00:27:15.175 ************************************ 00:27:15.175 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:27:15.175 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:27:15.175 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:15.175 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:15.175 ************************************ 00:27:15.175 START TEST nvmf_lvol 00:27:15.175 ************************************ 00:27:15.175 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:27:15.175 * Looking for test storage... 
00:27:15.175 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:15.175 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:15.175 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:27:15.175 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:15.175 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:15.175 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:15.175 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:15.175 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:15.175 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:27:15.175 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:27:15.175 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:27:15.175 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:27:15.175 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:27:15.175 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:27:15.175 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:27:15.175 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:15.175 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:27:15.175 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:27:15.175 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:15.175 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:15.175 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:27:15.175 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:27:15.175 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:15.175 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:27:15.175 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:27:15.175 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:27:15.175 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:27:15.175 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:15.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.176 --rc genhtml_branch_coverage=1 00:27:15.176 --rc genhtml_function_coverage=1 00:27:15.176 --rc genhtml_legend=1 00:27:15.176 --rc geninfo_all_blocks=1 00:27:15.176 --rc geninfo_unexecuted_blocks=1 00:27:15.176 00:27:15.176 ' 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:15.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.176 --rc genhtml_branch_coverage=1 00:27:15.176 --rc genhtml_function_coverage=1 00:27:15.176 --rc genhtml_legend=1 00:27:15.176 --rc geninfo_all_blocks=1 00:27:15.176 --rc geninfo_unexecuted_blocks=1 00:27:15.176 00:27:15.176 ' 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:15.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.176 --rc genhtml_branch_coverage=1 00:27:15.176 --rc genhtml_function_coverage=1 00:27:15.176 --rc genhtml_legend=1 00:27:15.176 --rc geninfo_all_blocks=1 00:27:15.176 --rc geninfo_unexecuted_blocks=1 00:27:15.176 00:27:15.176 ' 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:15.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.176 --rc genhtml_branch_coverage=1 00:27:15.176 --rc genhtml_function_coverage=1 00:27:15.176 --rc genhtml_legend=1 00:27:15.176 --rc geninfo_all_blocks=1 00:27:15.176 --rc geninfo_unexecuted_blocks=1 00:27:15.176 00:27:15.176 ' 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:15.176 07:29:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:27:15.176 07:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:17.083 07:29:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:17.083 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:17.083 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:17.083 Found net devices under 0000:09:00.0: cvl_0_0 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:17.083 Found net devices under 0000:09:00.1: cvl_0_1 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:17.083 
07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:17.083 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:17.342 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:17.342 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:17.342 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:17.342 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:17.342 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:17.342 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:17.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:17.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:27:17.342 00:27:17.342 --- 10.0.0.2 ping statistics --- 00:27:17.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.342 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:27:17.342 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:17.342 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:17.342 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:27:17.342 00:27:17.342 --- 10.0.0.1 ping statistics --- 00:27:17.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.342 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:27:17.342 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:17.343 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:27:17.343 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:17.343 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:17.343 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:17.343 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:17.343 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:17.343 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:17.343 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:17.343 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:27:17.343 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:17.343 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:17.343 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:17.343 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2632676 00:27:17.343 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:27:17.343 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2632676 00:27:17.343 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 2632676 ']' 00:27:17.343 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:17.343 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:17.343 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:17.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:17.343 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:17.343 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:17.343 [2024-11-20 07:29:20.659203] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:27:17.343 [2024-11-20 07:29:20.660337] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:27:17.343 [2024-11-20 07:29:20.660416] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:17.343 [2024-11-20 07:29:20.734728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:17.601 [2024-11-20 07:29:20.796267] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:17.601 [2024-11-20 07:29:20.796321] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:17.601 [2024-11-20 07:29:20.796350] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:17.601 [2024-11-20 07:29:20.796378] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:17.601 [2024-11-20 07:29:20.796390] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:17.601 [2024-11-20 07:29:20.797824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:17.601 [2024-11-20 07:29:20.797879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:17.601 [2024-11-20 07:29:20.797882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:17.601 [2024-11-20 07:29:20.897010] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:17.601 [2024-11-20 07:29:20.897249] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:17.601 [2024-11-20 07:29:20.897250] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:17.601 [2024-11-20 07:29:20.897523] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
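The nvmf_lvol test body traced next drives the interrupt-mode target started above entirely through rpc.py. Condensed into plain shell, the sequence is roughly the following (a sketch that assumes the target is listening on the default /var/tmp/spdk.sock; $spdk stands for the checked-out SPDK tree, and $lvs, $lvol, $snap and $clone are the UUIDs captured from the preceding calls):

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc=$spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                                    # Malloc0
  $rpc bdev_malloc_create 64 512                                    # Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'    # stripe the two malloc bdevs
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                    # lvstore on the raid0 bdev
  lvol=$($rpc bdev_lvol_create -u $lvs lvol 20)                     # lvol named 'lvol', size 20
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 $lvol
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &            # background randwrite load
  snap=$($rpc bdev_lvol_snapshot $lvol MY_SNAPSHOT)                 # snapshot taken under I/O
  $rpc bdev_lvol_resize $lvol 30
  clone=$($rpc bdev_lvol_clone $snap MY_CLONE)
  $rpc bdev_lvol_inflate $clone
  wait                                                              # let the 10 s perf run finish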
00:27:17.601 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:17.601 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:27:17.601 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:17.601 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:17.601 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:17.601 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:17.601 07:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:17.860 [2024-11-20 07:29:21.198534] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:17.860 07:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:18.118 07:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:27:18.118 07:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:18.686 07:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:27:18.686 07:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:27:18.686 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:27:19.251 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=2c81a3bc-7fdf-4432-8956-9880a2b0c936 00:27:19.251 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2c81a3bc-7fdf-4432-8956-9880a2b0c936 lvol 20 00:27:19.510 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=3b3a8e4a-f64e-44b6-b9ec-e59c838ac023 00:27:19.510 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:19.769 07:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3b3a8e4a-f64e-44b6-b9ec-e59c838ac023 00:27:20.027 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:20.285 [2024-11-20 07:29:23.482740] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:27:20.285 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:20.544 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2633100 00:27:20.544 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:27:20.544 07:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:27:21.478 07:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 3b3a8e4a-f64e-44b6-b9ec-e59c838ac023 MY_SNAPSHOT 00:27:21.737 07:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=9cebff08-0bec-41c5-a5ff-a128b353ed7c 00:27:21.737 07:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 3b3a8e4a-f64e-44b6-b9ec-e59c838ac023 30 00:27:21.995 07:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 9cebff08-0bec-41c5-a5ff-a128b353ed7c MY_CLONE 00:27:22.562 07:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=70c4f8b1-b791-4cbd-aee0-59d7e453a669 00:27:22.562 07:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 70c4f8b1-b791-4cbd-aee0-59d7e453a669 00:27:23.129 07:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2633100 00:27:31.239 Initializing NVMe Controllers 00:27:31.239 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:27:31.239 Controller IO queue size 128, less than required. 00:27:31.239 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:31.239 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:27:31.239 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:27:31.239 Initialization complete. Launching workers. 
00:27:31.239 ======================================================== 00:27:31.239 Latency(us) 00:27:31.239 Device Information : IOPS MiB/s Average min max 00:27:31.239 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10458.90 40.86 12240.00 7053.85 68135.25 00:27:31.239 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10305.80 40.26 12421.66 4323.14 80587.90 00:27:31.239 ======================================================== 00:27:31.239 Total : 20764.70 81.11 12330.16 4323.14 80587.90 00:27:31.239 00:27:31.239 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:31.239 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3b3a8e4a-f64e-44b6-b9ec-e59c838ac023 00:27:31.497 07:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2c81a3bc-7fdf-4432-8956-9880a2b0c936 00:27:31.755 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:27:31.755 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:27:31.755 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:27:31.755 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:31.755 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:27:31.755 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:31.755 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:27:31.755 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:31.755 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:31.755 rmmod nvme_tcp 00:27:31.755 rmmod nvme_fabrics 00:27:31.755 rmmod nvme_keyring 00:27:31.755 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:31.755 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:27:31.755 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:27:31.755 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2632676 ']' 00:27:31.755 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2632676 00:27:31.755 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 2632676 ']' 00:27:31.755 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 2632676 00:27:31.755 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:27:31.755 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:31.755 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2632676 00:27:31.755 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:31.755 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:31.755 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2632676' 00:27:31.755 killing process with pid 2632676 00:27:31.755 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 2632676 00:27:31.756 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 2632676 00:27:32.015 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:32.015 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:32.015 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:32.015 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:27:32.015 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:27:32.015 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:32.015 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:27:32.015 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:32.015 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:32.015 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:32.015 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:32.015 07:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:34.555 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:34.555 00:27:34.555 real 0m19.330s 00:27:34.555 user 0m56.680s 00:27:34.555 sys 0m7.706s 00:27:34.555 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:34.555 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:34.555 ************************************ 00:27:34.555 END TEST nvmf_lvol 00:27:34.555 ************************************ 00:27:34.555 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:27:34.555 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:27:34.555 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:34.555 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:34.555 ************************************ 00:27:34.555 START TEST nvmf_lvs_grow 00:27:34.555 
************************************ 00:27:34.555 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:27:34.555 * Looking for test storage... 00:27:34.555 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:34.555 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:34.555 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:27:34.555 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:34.555 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:34.555 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:34.555 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:34.555 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:34.555 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:27:34.555 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:27:34.555 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:27:34.555 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:27:34.555 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:27:34.555 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:27:34.555 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:27:34.555 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:34.555 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:27:34.555 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:27:34.555 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:34.555 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:34.555 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:27:34.555 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:27:34.555 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:34.555 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:27:34.555 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:34.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:34.556 --rc genhtml_branch_coverage=1 00:27:34.556 --rc genhtml_function_coverage=1 00:27:34.556 --rc genhtml_legend=1 00:27:34.556 --rc geninfo_all_blocks=1 00:27:34.556 --rc geninfo_unexecuted_blocks=1 00:27:34.556 00:27:34.556 ' 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:34.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:34.556 --rc genhtml_branch_coverage=1 00:27:34.556 --rc genhtml_function_coverage=1 00:27:34.556 --rc genhtml_legend=1 00:27:34.556 --rc geninfo_all_blocks=1 00:27:34.556 --rc geninfo_unexecuted_blocks=1 00:27:34.556 00:27:34.556 ' 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:34.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:34.556 --rc genhtml_branch_coverage=1 00:27:34.556 --rc genhtml_function_coverage=1 00:27:34.556 --rc genhtml_legend=1 00:27:34.556 --rc geninfo_all_blocks=1 00:27:34.556 --rc geninfo_unexecuted_blocks=1 00:27:34.556 00:27:34.556 ' 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:34.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:34.556 --rc genhtml_branch_coverage=1 00:27:34.556 --rc genhtml_function_coverage=1 00:27:34.556 --rc genhtml_legend=1 00:27:34.556 --rc geninfo_all_blocks=1 00:27:34.556 --rc geninfo_unexecuted_blocks=1 00:27:34.556 00:27:34.556 ' 00:27:34.556 07:29:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:27:34.556 07:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:36.473 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:36.473 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:27:36.473 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:36.473 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:36.473 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:36.473 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:36.473 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:36.473 07:29:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:27:36.473 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:36.473 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:27:36.473 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:27:36.473 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:27:36.473 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:27:36.473 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:27:36.473 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:27:36.473 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:36.473 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:36.473 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:36.473 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:36.473 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:36.473 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:36.473 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:36.473 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:36.473 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:36.473 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:36.474 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:36.474 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:36.474 Found net devices under 0000:09:00.0: cvl_0_0 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:36.474 Found net devices under 0000:09:00.1: cvl_0_1 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:36.474 07:29:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:36.474 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:36.733 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:36.733 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:36.733 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:36.733 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:36.733 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:36.733 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:36.733 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:36.733 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:36.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:36.733 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:27:36.733 00:27:36.733 --- 10.0.0.2 ping statistics --- 00:27:36.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.733 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:27:36.733 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:36.733 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:36.733 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:27:36.733 00:27:36.733 --- 10.0.0.1 ping statistics --- 00:27:36.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.733 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:27:36.733 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:36.733 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:27:36.733 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:36.733 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:36.733 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:36.733 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:36.733 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:36.733 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:36.733 07:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:36.733 07:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:27:36.733 07:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:36.733 07:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:36.733 07:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:36.733 07:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2636373 00:27:36.733 07:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:27:36.733 07:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2636373 00:27:36.733 07:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 2636373 ']' 00:27:36.733 07:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:36.733 07:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:36.733 07:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:36.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:36.733 07:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:36.733 07:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:36.733 [2024-11-20 07:29:40.080815] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
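The lvs_grow_clean case set up in the trace that follows exercises lvstore growth on an AIO bdev backed by a plain file. Condensed into shell (a sketch with the same rpc.py and socket assumptions as above; $aio is the backing-file path used by the test):

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc=$spdk/scripts/rpc.py
  aio=$spdk/test/nvmf/target/aio_bdev
  $rpc nvmf_create_transport -t tcp -o -u 8192
  rm -f $aio && truncate -s 200M $aio                # 200 MiB backing file
  $rpc bdev_aio_create $aio aio_bdev 4096            # AIO bdev with 4096-byte blocks
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  $rpc bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters'   # 49
  lvol=$($rpc bdev_lvol_create -u $lvs lvol 150)     # lvol named 'lvol', size 150
  truncate -s 400M $aio                              # grow the backing file
  $rpc bdev_aio_rescan aio_bdev                      # block count 51200 -> 102400
  $rpc bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters'   # still 49: lvstore not grown yet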
00:27:36.733 [2024-11-20 07:29:40.081961] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:27:36.733 [2024-11-20 07:29:40.082025] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:36.733 [2024-11-20 07:29:40.156558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.051 [2024-11-20 07:29:40.218957] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:37.051 [2024-11-20 07:29:40.219007] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:37.051 [2024-11-20 07:29:40.219035] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:37.051 [2024-11-20 07:29:40.219047] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:37.051 [2024-11-20 07:29:40.219056] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:37.051 [2024-11-20 07:29:40.219694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:37.051 [2024-11-20 07:29:40.314378] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:37.051 [2024-11-20 07:29:40.314713] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:37.051 07:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:37.051 07:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:27:37.051 07:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:37.051 07:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:37.052 07:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:37.052 07:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:37.052 07:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:37.341 [2024-11-20 07:29:40.624271] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:37.342 07:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:27:37.342 07:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:37.342 07:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:37.342 07:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:37.342 ************************************ 00:27:37.342 START TEST lvs_grow_clean 00:27:37.342 ************************************ 00:27:37.342 07:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # 
lvs_grow 00:27:37.342 07:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:27:37.342 07:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:27:37.342 07:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:27:37.342 07:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:27:37.342 07:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:27:37.342 07:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:27:37.342 07:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:37.342 07:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:37.342 07:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:27:37.600 07:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:27:37.600 07:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:27:37.858 07:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=960d5b95-9ee5-4d11-824c-30eeb97ffb2f 00:27:37.858 07:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 960d5b95-9ee5-4d11-824c-30eeb97ffb2f 00:27:37.858 07:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:27:38.116 07:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:27:38.116 07:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:27:38.116 07:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 960d5b95-9ee5-4d11-824c-30eeb97ffb2f lvol 150 00:27:38.373 07:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=fe93bd56-2893-42f3-86ac-be79dd087137 00:27:38.373 07:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:38.373 07:29:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:27:38.631 [2024-11-20 07:29:42.044181] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:27:38.631 [2024-11-20 07:29:42.044289] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:27:38.631 true 00:27:38.631 07:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 960d5b95-9ee5-4d11-824c-30eeb97ffb2f 00:27:38.631 07:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:27:39.197 07:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:27:39.197 07:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:39.197 07:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fe93bd56-2893-42f3-86ac-be79dd087137 00:27:39.455 07:29:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:39.714 [2024-11-20 07:29:43.124498] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:39.714 07:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:39.972 07:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2636813 00:27:39.972 07:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:27:39.972 07:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:39.972 07:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2636813 /var/tmp/bdevperf.sock 00:27:39.972 07:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 2636813 ']' 00:27:39.972 07:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:27:39.972 07:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:39.972 07:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:39.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:39.972 07:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:39.972 07:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:27:40.231 [2024-11-20 07:29:43.446854] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:27:40.231 [2024-11-20 07:29:43.446942] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2636813 ] 00:27:40.231 [2024-11-20 07:29:43.518094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.231 [2024-11-20 07:29:43.580271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:40.489 07:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:40.489 07:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:27:40.489 07:29:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:27:41.055 Nvme0n1 00:27:41.055 07:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:27:41.314 [ 00:27:41.314 { 00:27:41.314 "name": "Nvme0n1", 00:27:41.314 "aliases": [ 00:27:41.314 "fe93bd56-2893-42f3-86ac-be79dd087137" 00:27:41.314 ], 00:27:41.314 "product_name": "NVMe disk", 00:27:41.314 "block_size": 4096, 00:27:41.314 "num_blocks": 38912, 00:27:41.314 "uuid": "fe93bd56-2893-42f3-86ac-be79dd087137", 00:27:41.314 "numa_id": 0, 00:27:41.314 "assigned_rate_limits": { 00:27:41.314 "rw_ios_per_sec": 0, 00:27:41.314 "rw_mbytes_per_sec": 0, 00:27:41.314 "r_mbytes_per_sec": 0, 00:27:41.314 "w_mbytes_per_sec": 0 00:27:41.314 }, 00:27:41.314 "claimed": false, 00:27:41.314 "zoned": false, 00:27:41.314 "supported_io_types": { 00:27:41.314 "read": true, 00:27:41.314 "write": true, 00:27:41.314 "unmap": true, 00:27:41.314 "flush": true, 00:27:41.314 "reset": true, 00:27:41.314 "nvme_admin": true, 00:27:41.314 "nvme_io": true, 00:27:41.314 "nvme_io_md": false, 00:27:41.314 "write_zeroes": true, 00:27:41.314 "zcopy": false, 00:27:41.314 "get_zone_info": false, 00:27:41.314 "zone_management": false, 00:27:41.314 "zone_append": false, 00:27:41.314 "compare": true, 00:27:41.314 "compare_and_write": true, 00:27:41.314 "abort": true, 00:27:41.314 "seek_hole": false, 00:27:41.314 "seek_data": false, 00:27:41.314 "copy": true, 
00:27:41.314 "nvme_iov_md": false 00:27:41.314 }, 00:27:41.314 "memory_domains": [ 00:27:41.314 { 00:27:41.314 "dma_device_id": "system", 00:27:41.314 "dma_device_type": 1 00:27:41.314 } 00:27:41.314 ], 00:27:41.314 "driver_specific": { 00:27:41.314 "nvme": [ 00:27:41.314 { 00:27:41.314 "trid": { 00:27:41.314 "trtype": "TCP", 00:27:41.314 "adrfam": "IPv4", 00:27:41.314 "traddr": "10.0.0.2", 00:27:41.314 "trsvcid": "4420", 00:27:41.314 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:41.314 }, 00:27:41.314 "ctrlr_data": { 00:27:41.314 "cntlid": 1, 00:27:41.314 "vendor_id": "0x8086", 00:27:41.314 "model_number": "SPDK bdev Controller", 00:27:41.314 "serial_number": "SPDK0", 00:27:41.314 "firmware_revision": "25.01", 00:27:41.314 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:41.314 "oacs": { 00:27:41.314 "security": 0, 00:27:41.314 "format": 0, 00:27:41.314 "firmware": 0, 00:27:41.314 "ns_manage": 0 00:27:41.314 }, 00:27:41.314 "multi_ctrlr": true, 00:27:41.314 "ana_reporting": false 00:27:41.314 }, 00:27:41.314 "vs": { 00:27:41.314 "nvme_version": "1.3" 00:27:41.314 }, 00:27:41.314 "ns_data": { 00:27:41.314 "id": 1, 00:27:41.314 "can_share": true 00:27:41.314 } 00:27:41.314 } 00:27:41.314 ], 00:27:41.314 "mp_policy": "active_passive" 00:27:41.314 } 00:27:41.314 } 00:27:41.314 ] 00:27:41.314 07:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2636948 00:27:41.314 07:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:27:41.314 07:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:41.314 Running I/O for 10 seconds... 
00:27:42.248 Latency(us) 00:27:42.248 [2024-11-20T06:29:45.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:42.248 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:42.248 Nvme0n1 : 1.00 14639.00 57.18 0.00 0.00 0.00 0.00 0.00 00:27:42.248 [2024-11-20T06:29:45.681Z] =================================================================================================================== 00:27:42.248 [2024-11-20T06:29:45.681Z] Total : 14639.00 57.18 0.00 0.00 0.00 0.00 0.00 00:27:42.248 00:27:43.181 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 960d5b95-9ee5-4d11-824c-30eeb97ffb2f 00:27:43.439 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:43.439 Nvme0n1 : 2.00 14876.00 58.11 0.00 0.00 0.00 0.00 0.00 00:27:43.439 [2024-11-20T06:29:46.872Z] =================================================================================================================== 00:27:43.439 [2024-11-20T06:29:46.872Z] Total : 14876.00 58.11 0.00 0.00 0.00 0.00 0.00 00:27:43.439 00:27:43.439 true 00:27:43.439 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 960d5b95-9ee5-4d11-824c-30eeb97ffb2f 00:27:43.439 07:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:27:43.697 07:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:27:43.697 07:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:27:43.697 07:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2636948 00:27:44.261 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:44.261 Nvme0n1 : 3.00 14997.33 58.58 0.00 0.00 0.00 0.00 0.00 00:27:44.261 [2024-11-20T06:29:47.694Z] =================================================================================================================== 00:27:44.261 [2024-11-20T06:29:47.694Z] Total : 14997.33 58.58 0.00 0.00 0.00 0.00 0.00 00:27:44.261 00:27:45.634 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:45.634 Nvme0n1 : 4.00 15026.25 58.70 0.00 0.00 0.00 0.00 0.00 00:27:45.634 [2024-11-20T06:29:49.067Z] =================================================================================================================== 00:27:45.634 [2024-11-20T06:29:49.067Z] Total : 15026.25 58.70 0.00 0.00 0.00 0.00 0.00 00:27:45.634 00:27:46.568 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:46.568 Nvme0n1 : 5.00 15094.40 58.96 0.00 0.00 0.00 0.00 0.00 00:27:46.568 [2024-11-20T06:29:50.001Z] =================================================================================================================== 00:27:46.568 [2024-11-20T06:29:50.001Z] Total : 15094.40 58.96 0.00 0.00 0.00 0.00 0.00 00:27:46.568 00:27:47.501 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:47.501 Nvme0n1 : 6.00 15118.67 59.06 0.00 0.00 0.00 0.00 0.00 00:27:47.501 [2024-11-20T06:29:50.934Z] 
=================================================================================================================== 00:27:47.501 [2024-11-20T06:29:50.934Z] Total : 15118.67 59.06 0.00 0.00 0.00 0.00 0.00 00:27:47.501 00:27:48.436 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:48.436 Nvme0n1 : 7.00 15172.29 59.27 0.00 0.00 0.00 0.00 0.00 00:27:48.436 [2024-11-20T06:29:51.869Z] =================================================================================================================== 00:27:48.436 [2024-11-20T06:29:51.869Z] Total : 15172.29 59.27 0.00 0.00 0.00 0.00 0.00 00:27:48.436 00:27:49.372 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:49.372 Nvme0n1 : 8.00 15228.38 59.49 0.00 0.00 0.00 0.00 0.00 00:27:49.372 [2024-11-20T06:29:52.805Z] =================================================================================================================== 00:27:49.372 [2024-11-20T06:29:52.805Z] Total : 15228.38 59.49 0.00 0.00 0.00 0.00 0.00 00:27:49.372 00:27:50.306 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:50.306 Nvme0n1 : 9.00 15272.00 59.66 0.00 0.00 0.00 0.00 0.00 00:27:50.306 [2024-11-20T06:29:53.739Z] =================================================================================================================== 00:27:50.306 [2024-11-20T06:29:53.739Z] Total : 15272.00 59.66 0.00 0.00 0.00 0.00 0.00 00:27:50.306 00:27:51.682 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:51.682 Nvme0n1 : 10.00 15275.20 59.67 0.00 0.00 0.00 0.00 0.00 00:27:51.682 [2024-11-20T06:29:55.115Z] =================================================================================================================== 00:27:51.682 [2024-11-20T06:29:55.115Z] Total : 15275.20 59.67 0.00 0.00 0.00 0.00 0.00 00:27:51.682 00:27:51.682 00:27:51.682 Latency(us) 00:27:51.682 [2024-11-20T06:29:55.115Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:51.682 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:51.682 Nvme0n1 : 10.01 15280.16 59.69 0.00 0.00 8372.05 4393.34 18932.62 00:27:51.682 [2024-11-20T06:29:55.115Z] =================================================================================================================== 00:27:51.682 [2024-11-20T06:29:55.115Z] Total : 15280.16 59.69 0.00 0.00 8372.05 4393.34 18932.62 00:27:51.682 { 00:27:51.682 "results": [ 00:27:51.682 { 00:27:51.682 "job": "Nvme0n1", 00:27:51.682 "core_mask": "0x2", 00:27:51.682 "workload": "randwrite", 00:27:51.682 "status": "finished", 00:27:51.682 "queue_depth": 128, 00:27:51.682 "io_size": 4096, 00:27:51.682 "runtime": 10.009256, 00:27:51.682 "iops": 15280.15668697054, 00:27:51.682 "mibps": 59.68811205847867, 00:27:51.682 "io_failed": 0, 00:27:51.682 "io_timeout": 0, 00:27:51.682 "avg_latency_us": 8372.05430229272, 00:27:51.682 "min_latency_us": 4393.339259259259, 00:27:51.682 "max_latency_us": 18932.62222222222 00:27:51.682 } 00:27:51.682 ], 00:27:51.682 "core_count": 1 00:27:51.682 } 00:27:51.682 07:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2636813 00:27:51.682 07:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 2636813 ']' 00:27:51.682 07:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 2636813 
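The grow itself happens while bdevperf is still issuing I/O: after a 2-second sleep the script calls bdev_lvol_grow_lvstore and re-reads the cluster count. A minimal sketch of that check, with the same "$rpc"/"$lvs" shorthand as in the sketch above:

  # shorthand only; the grow call and the jq check are shown verbatim in the log
  $rpc bdev_lvol_grow_lvstore -u "$lvs"        # lvstore absorbs the extra 200 MiB of the AIO bdev
  data_clusters=$($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
  (( data_clusters == 99 ))                    # was 49 before the grow (4 MiB clusters)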
00:27:51.682 07:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:27:51.682 07:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:51.682 07:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2636813 00:27:51.682 07:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:51.682 07:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:51.682 07:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2636813' 00:27:51.682 killing process with pid 2636813 00:27:51.682 07:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 2636813 00:27:51.682 Received shutdown signal, test time was about 10.000000 seconds 00:27:51.682 00:27:51.682 Latency(us) 00:27:51.682 [2024-11-20T06:29:55.115Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:51.682 [2024-11-20T06:29:55.115Z] =================================================================================================================== 00:27:51.682 [2024-11-20T06:29:55.115Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:51.682 07:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 2636813 00:27:51.682 07:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:51.942 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:52.510 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 960d5b95-9ee5-4d11-824c-30eeb97ffb2f 00:27:52.510 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:27:52.510 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:27:52.510 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:27:52.510 07:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:27:52.769 [2024-11-20 07:29:56.196258] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:27:53.027 07:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 960d5b95-9ee5-4d11-824c-30eeb97ffb2f 
00:27:53.027 07:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:27:53.027 07:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 960d5b95-9ee5-4d11-824c-30eeb97ffb2f 00:27:53.027 07:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:53.027 07:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:53.027 07:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:53.027 07:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:53.027 07:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:53.027 07:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:53.027 07:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:53.027 07:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:27:53.028 07:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 960d5b95-9ee5-4d11-824c-30eeb97ffb2f 00:27:53.286 request: 00:27:53.286 { 00:27:53.286 "uuid": "960d5b95-9ee5-4d11-824c-30eeb97ffb2f", 00:27:53.286 "method": "bdev_lvol_get_lvstores", 00:27:53.286 "req_id": 1 00:27:53.286 } 00:27:53.286 Got JSON-RPC error response 00:27:53.286 response: 00:27:53.286 { 00:27:53.286 "code": -19, 00:27:53.286 "message": "No such device" 00:27:53.286 } 00:27:53.286 07:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:27:53.286 07:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:53.286 07:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:53.286 07:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:53.286 07:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:27:53.544 aio_bdev 00:27:53.544 07:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
fe93bd56-2893-42f3-86ac-be79dd087137 00:27:53.544 07:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=fe93bd56-2893-42f3-86ac-be79dd087137 00:27:53.544 07:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:53.544 07:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:27:53.544 07:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:53.544 07:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:53.544 07:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:53.802 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fe93bd56-2893-42f3-86ac-be79dd087137 -t 2000 00:27:54.061 [ 00:27:54.061 { 00:27:54.061 "name": "fe93bd56-2893-42f3-86ac-be79dd087137", 00:27:54.061 "aliases": [ 00:27:54.061 "lvs/lvol" 00:27:54.061 ], 00:27:54.061 "product_name": "Logical Volume", 00:27:54.061 "block_size": 4096, 00:27:54.061 "num_blocks": 38912, 00:27:54.061 "uuid": "fe93bd56-2893-42f3-86ac-be79dd087137", 00:27:54.061 "assigned_rate_limits": { 00:27:54.061 "rw_ios_per_sec": 0, 00:27:54.061 "rw_mbytes_per_sec": 0, 00:27:54.061 "r_mbytes_per_sec": 0, 00:27:54.061 "w_mbytes_per_sec": 0 00:27:54.061 }, 00:27:54.061 "claimed": false, 00:27:54.061 "zoned": false, 00:27:54.061 "supported_io_types": { 00:27:54.061 "read": true, 00:27:54.061 "write": true, 00:27:54.061 "unmap": true, 00:27:54.061 "flush": false, 00:27:54.061 "reset": true, 00:27:54.061 "nvme_admin": false, 00:27:54.061 "nvme_io": false, 00:27:54.061 "nvme_io_md": false, 00:27:54.061 "write_zeroes": true, 00:27:54.061 "zcopy": false, 00:27:54.061 "get_zone_info": false, 00:27:54.061 "zone_management": false, 00:27:54.061 "zone_append": false, 00:27:54.061 "compare": false, 00:27:54.061 "compare_and_write": false, 00:27:54.061 "abort": false, 00:27:54.061 "seek_hole": true, 00:27:54.061 "seek_data": true, 00:27:54.061 "copy": false, 00:27:54.061 "nvme_iov_md": false 00:27:54.061 }, 00:27:54.061 "driver_specific": { 00:27:54.061 "lvol": { 00:27:54.061 "lvol_store_uuid": "960d5b95-9ee5-4d11-824c-30eeb97ffb2f", 00:27:54.061 "base_bdev": "aio_bdev", 00:27:54.061 "thin_provision": false, 00:27:54.061 "num_allocated_clusters": 38, 00:27:54.061 "snapshot": false, 00:27:54.061 "clone": false, 00:27:54.061 "esnap_clone": false 00:27:54.061 } 00:27:54.061 } 00:27:54.061 } 00:27:54.061 ] 00:27:54.061 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:27:54.061 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 960d5b95-9ee5-4d11-824c-30eeb97ffb2f 00:27:54.061 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:27:54.320 07:29:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:27:54.320 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 960d5b95-9ee5-4d11-824c-30eeb97ffb2f 00:27:54.320 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:27:54.578 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:27:54.578 07:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fe93bd56-2893-42f3-86ac-be79dd087137 00:27:54.836 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 960d5b95-9ee5-4d11-824c-30eeb97ffb2f 00:27:55.095 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:27:55.353 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:55.353 00:27:55.353 real 0m18.073s 00:27:55.353 user 0m17.620s 00:27:55.353 sys 0m1.911s 00:27:55.353 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:55.353 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:27:55.353 ************************************ 00:27:55.353 END TEST lvs_grow_clean 00:27:55.353 ************************************ 00:27:55.353 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:27:55.353 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:55.353 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:55.353 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:55.610 ************************************ 00:27:55.610 START TEST lvs_grow_dirty 00:27:55.610 ************************************ 00:27:55.610 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:27:55.610 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:27:55.610 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:27:55.610 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:27:55.610 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:27:55.610 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:27:55.610 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:27:55.610 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:55.610 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:55.610 07:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:27:55.869 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:27:55.869 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:27:56.127 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=aa9d4276-4628-488a-9d1f-7d6a2a742411 00:27:56.127 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa9d4276-4628-488a-9d1f-7d6a2a742411 00:27:56.127 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:27:56.384 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:27:56.384 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:27:56.384 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u aa9d4276-4628-488a-9d1f-7d6a2a742411 lvol 150 00:27:56.642 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=4ec61439-1b2e-407a-b53a-e6459865422d 00:27:56.642 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:56.642 07:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:27:56.901 [2024-11-20 07:30:00.208194] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:27:56.901 [2024-11-20 07:30:00.208292] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:27:56.901 true 00:27:56.901 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa9d4276-4628-488a-9d1f-7d6a2a742411 00:27:56.901 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:27:57.159 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:27:57.159 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:57.418 07:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4ec61439-1b2e-407a-b53a-e6459865422d 00:27:57.677 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:57.935 [2024-11-20 07:30:01.332442] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:57.935 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:58.194 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2639068 00:27:58.194 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:27:58.452 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:58.452 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2639068 /var/tmp/bdevperf.sock 00:27:58.452 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 2639068 ']' 00:27:58.452 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:58.452 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:58.452 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:58.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
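As in the clean run, bdevperf is started with -z so it idles until driven over its own RPC socket; the exported namespace is then attached as an NVMe-oF TCP controller and the workload is kicked off via bdevperf.py. A sketch of that sequence, with "$spdk" standing in for the SPDK checkout path and "$rpc" for scripts/rpc.py:

  # path shorthand only; flags and addresses are the ones shown in the log
  $spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 \
      -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &              # -z: wait for RPC before running I/O
  bdevperf_pid=$!
  waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock          # autotest_common.sh helper used above
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests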
00:27:58.452 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:58.452 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:27:58.452 [2024-11-20 07:30:01.671459] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:27:58.452 [2024-11-20 07:30:01.671543] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2639068 ] 00:27:58.452 [2024-11-20 07:30:01.736127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.452 [2024-11-20 07:30:01.794138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:58.711 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:58.711 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:27:58.711 07:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:27:58.969 Nvme0n1 00:27:59.227 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:27:59.486 [ 00:27:59.486 { 00:27:59.486 "name": "Nvme0n1", 00:27:59.486 "aliases": [ 00:27:59.486 "4ec61439-1b2e-407a-b53a-e6459865422d" 00:27:59.486 ], 00:27:59.486 "product_name": "NVMe disk", 00:27:59.486 "block_size": 4096, 00:27:59.486 "num_blocks": 38912, 00:27:59.486 "uuid": "4ec61439-1b2e-407a-b53a-e6459865422d", 00:27:59.486 "numa_id": 0, 00:27:59.486 "assigned_rate_limits": { 00:27:59.486 "rw_ios_per_sec": 0, 00:27:59.486 "rw_mbytes_per_sec": 0, 00:27:59.486 "r_mbytes_per_sec": 0, 00:27:59.486 "w_mbytes_per_sec": 0 00:27:59.486 }, 00:27:59.486 "claimed": false, 00:27:59.486 "zoned": false, 00:27:59.486 "supported_io_types": { 00:27:59.486 "read": true, 00:27:59.486 "write": true, 00:27:59.486 "unmap": true, 00:27:59.486 "flush": true, 00:27:59.486 "reset": true, 00:27:59.486 "nvme_admin": true, 00:27:59.486 "nvme_io": true, 00:27:59.486 "nvme_io_md": false, 00:27:59.486 "write_zeroes": true, 00:27:59.486 "zcopy": false, 00:27:59.486 "get_zone_info": false, 00:27:59.486 "zone_management": false, 00:27:59.486 "zone_append": false, 00:27:59.486 "compare": true, 00:27:59.486 "compare_and_write": true, 00:27:59.486 "abort": true, 00:27:59.486 "seek_hole": false, 00:27:59.486 "seek_data": false, 00:27:59.486 "copy": true, 00:27:59.486 "nvme_iov_md": false 00:27:59.486 }, 00:27:59.486 "memory_domains": [ 00:27:59.486 { 00:27:59.486 "dma_device_id": "system", 00:27:59.486 "dma_device_type": 1 00:27:59.486 } 00:27:59.486 ], 00:27:59.486 "driver_specific": { 00:27:59.486 "nvme": [ 00:27:59.486 { 00:27:59.486 "trid": { 00:27:59.486 "trtype": "TCP", 00:27:59.486 "adrfam": "IPv4", 00:27:59.486 "traddr": "10.0.0.2", 00:27:59.486 "trsvcid": "4420", 00:27:59.486 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:59.486 }, 00:27:59.486 "ctrlr_data": 
{ 00:27:59.486 "cntlid": 1, 00:27:59.486 "vendor_id": "0x8086", 00:27:59.486 "model_number": "SPDK bdev Controller", 00:27:59.486 "serial_number": "SPDK0", 00:27:59.486 "firmware_revision": "25.01", 00:27:59.486 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:59.486 "oacs": { 00:27:59.486 "security": 0, 00:27:59.486 "format": 0, 00:27:59.486 "firmware": 0, 00:27:59.486 "ns_manage": 0 00:27:59.486 }, 00:27:59.486 "multi_ctrlr": true, 00:27:59.486 "ana_reporting": false 00:27:59.486 }, 00:27:59.486 "vs": { 00:27:59.486 "nvme_version": "1.3" 00:27:59.486 }, 00:27:59.486 "ns_data": { 00:27:59.486 "id": 1, 00:27:59.486 "can_share": true 00:27:59.486 } 00:27:59.486 } 00:27:59.486 ], 00:27:59.486 "mp_policy": "active_passive" 00:27:59.486 } 00:27:59.486 } 00:27:59.486 ] 00:27:59.486 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2639227 00:27:59.486 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:59.487 07:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:27:59.487 Running I/O for 10 seconds... 00:28:00.447 Latency(us) 00:28:00.447 [2024-11-20T06:30:03.880Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:00.447 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:00.447 Nvme0n1 : 1.00 14605.00 57.05 0.00 0.00 0.00 0.00 0.00 00:28:00.447 [2024-11-20T06:30:03.880Z] =================================================================================================================== 00:28:00.447 [2024-11-20T06:30:03.880Z] Total : 14605.00 57.05 0.00 0.00 0.00 0.00 0.00 00:28:00.447 00:28:01.382 07:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u aa9d4276-4628-488a-9d1f-7d6a2a742411 00:28:01.382 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:01.382 Nvme0n1 : 2.00 14795.50 57.79 0.00 0.00 0.00 0.00 0.00 00:28:01.382 [2024-11-20T06:30:04.815Z] =================================================================================================================== 00:28:01.382 [2024-11-20T06:30:04.815Z] Total : 14795.50 57.79 0.00 0.00 0.00 0.00 0.00 00:28:01.382 00:28:01.640 true 00:28:01.640 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa9d4276-4628-488a-9d1f-7d6a2a742411 00:28:01.640 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:28:01.897 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:28:01.897 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:28:01.897 07:30:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2639227 00:28:02.463 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:02.463 Nvme0n1 : 
3.00 14859.00 58.04 0.00 0.00 0.00 0.00 0.00 00:28:02.463 [2024-11-20T06:30:05.896Z] =================================================================================================================== 00:28:02.463 [2024-11-20T06:30:05.896Z] Total : 14859.00 58.04 0.00 0.00 0.00 0.00 0.00 00:28:02.463 00:28:03.397 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:03.397 Nvme0n1 : 4.00 14954.25 58.42 0.00 0.00 0.00 0.00 0.00 00:28:03.397 [2024-11-20T06:30:06.830Z] =================================================================================================================== 00:28:03.397 [2024-11-20T06:30:06.830Z] Total : 14954.25 58.42 0.00 0.00 0.00 0.00 0.00 00:28:03.397 00:28:04.770 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:04.770 Nvme0n1 : 5.00 15011.40 58.64 0.00 0.00 0.00 0.00 0.00 00:28:04.770 [2024-11-20T06:30:08.203Z] =================================================================================================================== 00:28:04.770 [2024-11-20T06:30:08.203Z] Total : 15011.40 58.64 0.00 0.00 0.00 0.00 0.00 00:28:04.770 00:28:05.704 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:05.704 Nvme0n1 : 6.00 15028.33 58.70 0.00 0.00 0.00 0.00 0.00 00:28:05.704 [2024-11-20T06:30:09.137Z] =================================================================================================================== 00:28:05.704 [2024-11-20T06:30:09.137Z] Total : 15028.33 58.70 0.00 0.00 0.00 0.00 0.00 00:28:05.704 00:28:06.635 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:06.635 Nvme0n1 : 7.00 15067.71 58.86 0.00 0.00 0.00 0.00 0.00 00:28:06.635 [2024-11-20T06:30:10.068Z] =================================================================================================================== 00:28:06.635 [2024-11-20T06:30:10.068Z] Total : 15067.71 58.86 0.00 0.00 0.00 0.00 0.00 00:28:06.635 00:28:07.613 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:07.613 Nvme0n1 : 8.00 15073.38 58.88 0.00 0.00 0.00 0.00 0.00 00:28:07.613 [2024-11-20T06:30:11.046Z] =================================================================================================================== 00:28:07.613 [2024-11-20T06:30:11.046Z] Total : 15073.38 58.88 0.00 0.00 0.00 0.00 0.00 00:28:07.613 00:28:08.566 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:08.566 Nvme0n1 : 9.00 15106.00 59.01 0.00 0.00 0.00 0.00 0.00 00:28:08.566 [2024-11-20T06:30:12.000Z] =================================================================================================================== 00:28:08.567 [2024-11-20T06:30:12.000Z] Total : 15106.00 59.01 0.00 0.00 0.00 0.00 0.00 00:28:08.567 00:28:09.501 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:09.501 Nvme0n1 : 10.00 15132.10 59.11 0.00 0.00 0.00 0.00 0.00 00:28:09.501 [2024-11-20T06:30:12.934Z] =================================================================================================================== 00:28:09.501 [2024-11-20T06:30:12.934Z] Total : 15132.10 59.11 0.00 0.00 0.00 0.00 0.00 00:28:09.501 00:28:09.501 00:28:09.501 Latency(us) 00:28:09.501 [2024-11-20T06:30:12.934Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:09.501 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:09.501 Nvme0n1 : 10.01 15130.21 59.10 0.00 0.00 8454.01 4611.79 19126.80 00:28:09.501 
[2024-11-20T06:30:12.934Z] =================================================================================================================== 00:28:09.501 [2024-11-20T06:30:12.934Z] Total : 15130.21 59.10 0.00 0.00 8454.01 4611.79 19126.80 00:28:09.501 { 00:28:09.501 "results": [ 00:28:09.501 { 00:28:09.501 "job": "Nvme0n1", 00:28:09.501 "core_mask": "0x2", 00:28:09.501 "workload": "randwrite", 00:28:09.501 "status": "finished", 00:28:09.501 "queue_depth": 128, 00:28:09.501 "io_size": 4096, 00:28:09.501 "runtime": 10.005547, 00:28:09.501 "iops": 15130.207274025099, 00:28:09.501 "mibps": 59.10237216416054, 00:28:09.501 "io_failed": 0, 00:28:09.501 "io_timeout": 0, 00:28:09.501 "avg_latency_us": 8454.014560923732, 00:28:09.501 "min_latency_us": 4611.792592592593, 00:28:09.501 "max_latency_us": 19126.802962962964 00:28:09.501 } 00:28:09.501 ], 00:28:09.501 "core_count": 1 00:28:09.501 } 00:28:09.501 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2639068 00:28:09.501 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 2639068 ']' 00:28:09.501 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 2639068 00:28:09.501 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:28:09.501 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:09.501 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2639068 00:28:09.501 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:09.501 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:09.501 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2639068' 00:28:09.501 killing process with pid 2639068 00:28:09.501 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 2639068 00:28:09.501 Received shutdown signal, test time was about 10.000000 seconds 00:28:09.501 00:28:09.501 Latency(us) 00:28:09.501 [2024-11-20T06:30:12.934Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:09.501 [2024-11-20T06:30:12.934Z] =================================================================================================================== 00:28:09.501 [2024-11-20T06:30:12.934Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:09.501 07:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 2639068 00:28:09.760 07:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:10.019 07:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:28:10.588 07:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa9d4276-4628-488a-9d1f-7d6a2a742411 00:28:10.588 07:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:28:10.847 07:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:28:10.847 07:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:28:10.847 07:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2636373 00:28:10.847 07:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2636373 00:28:10.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2636373 Killed "${NVMF_APP[@]}" "$@" 00:28:10.847 07:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:28:10.847 07:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:28:10.847 07:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:10.847 07:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:10.847 07:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:10.847 07:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2641055 00:28:10.847 07:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:28:10.847 07:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2641055 00:28:10.847 07:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 2641055 ']' 00:28:10.847 07:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:10.847 07:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:10.847 07:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:10.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
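"Dirty" in this half of the test means the first nvmf_tgt (pid 2636373) was killed with SIGKILL while the grown lvstore was still loaded, so the lvstore was never cleanly unloaded; the fresh target started here (this run adds --interrupt-mode) must recover it once the AIO bdev is re-created, which surfaces a few entries below as "Performing recovery on blobstore". A rough sketch of that branch, with "$nvmfpid", "$rpc" and "$aio" as shorthand for the values visible in the log:

  kill -9 "$nvmfpid"                           # 2636373 above: leave the lvstore dirty on purpose
  nvmfappstart -m 0x1                          # restart the target, waitforlisten on /var/tmp/spdk.sock
  $rpc bdev_aio_create "$aio" aio_bdev 4096    # re-registering the base bdev triggers blobstore recovery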
00:28:10.847 07:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:10.847 07:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:10.847 [2024-11-20 07:30:14.138941] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:10.847 [2024-11-20 07:30:14.140101] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:28:10.847 [2024-11-20 07:30:14.140169] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:10.847 [2024-11-20 07:30:14.216862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.847 [2024-11-20 07:30:14.274858] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:10.847 [2024-11-20 07:30:14.274911] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:10.847 [2024-11-20 07:30:14.274939] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:10.847 [2024-11-20 07:30:14.274950] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:10.847 [2024-11-20 07:30:14.274959] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:10.847 [2024-11-20 07:30:14.275558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:11.106 [2024-11-20 07:30:14.377478] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:11.106 [2024-11-20 07:30:14.377803] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
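Once the lvstore is recovered, the remainder of the run below re-checks that the original lvol reappears under its old UUID and that the grow survived the crash, i.e. 99 total data clusters with 38 still allocated and 61 free, before tearing everything down. A minimal sketch of those checks, with "$lvol" as shorthand for the lvol UUID (4ec61439-1b2e-407a-b53a-e6459865422d) and "$lvs" for the lvstore UUID:

  $rpc bdev_wait_for_examine                   # let the recovered lvstore finish loading
  $rpc bdev_get_bdevs -b "$lvol" -t 2000       # waitforbdev: the lvol bdev is back
  free_clusters=$($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
  (( free_clusters == 61 ))                    # 99 total - 38 allocated: the grow persisted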
00:28:11.106 07:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:11.106 07:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:28:11.106 07:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:11.106 07:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:11.106 07:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:11.106 07:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:11.106 07:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:11.365 [2024-11-20 07:30:14.726436] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:28:11.365 [2024-11-20 07:30:14.726576] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:28:11.365 [2024-11-20 07:30:14.726626] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:28:11.365 07:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:28:11.365 07:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 4ec61439-1b2e-407a-b53a-e6459865422d 00:28:11.365 07:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=4ec61439-1b2e-407a-b53a-e6459865422d 00:28:11.365 07:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:28:11.365 07:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:28:11.365 07:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:28:11.365 07:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:28:11.365 07:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:11.623 07:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4ec61439-1b2e-407a-b53a-e6459865422d -t 2000 00:28:12.189 [ 00:28:12.189 { 00:28:12.189 "name": "4ec61439-1b2e-407a-b53a-e6459865422d", 00:28:12.189 "aliases": [ 00:28:12.189 "lvs/lvol" 00:28:12.189 ], 00:28:12.189 "product_name": "Logical Volume", 00:28:12.189 "block_size": 4096, 00:28:12.189 "num_blocks": 38912, 00:28:12.189 "uuid": "4ec61439-1b2e-407a-b53a-e6459865422d", 00:28:12.189 "assigned_rate_limits": { 00:28:12.189 "rw_ios_per_sec": 0, 00:28:12.189 "rw_mbytes_per_sec": 0, 00:28:12.189 
"r_mbytes_per_sec": 0, 00:28:12.189 "w_mbytes_per_sec": 0 00:28:12.189 }, 00:28:12.189 "claimed": false, 00:28:12.189 "zoned": false, 00:28:12.189 "supported_io_types": { 00:28:12.189 "read": true, 00:28:12.189 "write": true, 00:28:12.189 "unmap": true, 00:28:12.189 "flush": false, 00:28:12.189 "reset": true, 00:28:12.189 "nvme_admin": false, 00:28:12.189 "nvme_io": false, 00:28:12.189 "nvme_io_md": false, 00:28:12.189 "write_zeroes": true, 00:28:12.189 "zcopy": false, 00:28:12.189 "get_zone_info": false, 00:28:12.189 "zone_management": false, 00:28:12.189 "zone_append": false, 00:28:12.189 "compare": false, 00:28:12.189 "compare_and_write": false, 00:28:12.189 "abort": false, 00:28:12.189 "seek_hole": true, 00:28:12.189 "seek_data": true, 00:28:12.189 "copy": false, 00:28:12.189 "nvme_iov_md": false 00:28:12.189 }, 00:28:12.189 "driver_specific": { 00:28:12.189 "lvol": { 00:28:12.189 "lvol_store_uuid": "aa9d4276-4628-488a-9d1f-7d6a2a742411", 00:28:12.189 "base_bdev": "aio_bdev", 00:28:12.189 "thin_provision": false, 00:28:12.189 "num_allocated_clusters": 38, 00:28:12.189 "snapshot": false, 00:28:12.189 "clone": false, 00:28:12.189 "esnap_clone": false 00:28:12.189 } 00:28:12.189 } 00:28:12.189 } 00:28:12.189 ] 00:28:12.189 07:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:28:12.189 07:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa9d4276-4628-488a-9d1f-7d6a2a742411 00:28:12.189 07:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:28:12.189 07:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:28:12.189 07:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa9d4276-4628-488a-9d1f-7d6a2a742411 00:28:12.189 07:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:28:12.756 07:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:28:12.756 07:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:12.756 [2024-11-20 07:30:16.148135] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:28:13.014 07:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa9d4276-4628-488a-9d1f-7d6a2a742411 00:28:13.014 07:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:28:13.015 07:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa9d4276-4628-488a-9d1f-7d6a2a742411 00:28:13.015 07:30:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:13.015 07:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:13.015 07:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:13.015 07:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:13.015 07:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:13.015 07:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:13.015 07:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:13.015 07:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:28:13.015 07:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa9d4276-4628-488a-9d1f-7d6a2a742411 00:28:13.272 request: 00:28:13.272 { 00:28:13.272 "uuid": "aa9d4276-4628-488a-9d1f-7d6a2a742411", 00:28:13.272 "method": "bdev_lvol_get_lvstores", 00:28:13.272 "req_id": 1 00:28:13.272 } 00:28:13.272 Got JSON-RPC error response 00:28:13.272 response: 00:28:13.272 { 00:28:13.272 "code": -19, 00:28:13.272 "message": "No such device" 00:28:13.272 } 00:28:13.272 07:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:28:13.272 07:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:13.272 07:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:13.272 07:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:13.272 07:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:13.530 aio_bdev 00:28:13.530 07:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4ec61439-1b2e-407a-b53a-e6459865422d 00:28:13.530 07:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=4ec61439-1b2e-407a-b53a-e6459865422d 00:28:13.530 07:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:28:13.530 07:30:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:28:13.530 07:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:28:13.530 07:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:28:13.530 07:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:13.788 07:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4ec61439-1b2e-407a-b53a-e6459865422d -t 2000 00:28:14.047 [ 00:28:14.047 { 00:28:14.047 "name": "4ec61439-1b2e-407a-b53a-e6459865422d", 00:28:14.047 "aliases": [ 00:28:14.047 "lvs/lvol" 00:28:14.047 ], 00:28:14.047 "product_name": "Logical Volume", 00:28:14.047 "block_size": 4096, 00:28:14.047 "num_blocks": 38912, 00:28:14.047 "uuid": "4ec61439-1b2e-407a-b53a-e6459865422d", 00:28:14.047 "assigned_rate_limits": { 00:28:14.047 "rw_ios_per_sec": 0, 00:28:14.047 "rw_mbytes_per_sec": 0, 00:28:14.047 "r_mbytes_per_sec": 0, 00:28:14.047 "w_mbytes_per_sec": 0 00:28:14.047 }, 00:28:14.047 "claimed": false, 00:28:14.047 "zoned": false, 00:28:14.047 "supported_io_types": { 00:28:14.047 "read": true, 00:28:14.047 "write": true, 00:28:14.047 "unmap": true, 00:28:14.047 "flush": false, 00:28:14.047 "reset": true, 00:28:14.047 "nvme_admin": false, 00:28:14.047 "nvme_io": false, 00:28:14.047 "nvme_io_md": false, 00:28:14.047 "write_zeroes": true, 00:28:14.047 "zcopy": false, 00:28:14.047 "get_zone_info": false, 00:28:14.047 "zone_management": false, 00:28:14.047 "zone_append": false, 00:28:14.047 "compare": false, 00:28:14.047 "compare_and_write": false, 00:28:14.047 "abort": false, 00:28:14.047 "seek_hole": true, 00:28:14.047 "seek_data": true, 00:28:14.047 "copy": false, 00:28:14.047 "nvme_iov_md": false 00:28:14.047 }, 00:28:14.047 "driver_specific": { 00:28:14.047 "lvol": { 00:28:14.047 "lvol_store_uuid": "aa9d4276-4628-488a-9d1f-7d6a2a742411", 00:28:14.047 "base_bdev": "aio_bdev", 00:28:14.047 "thin_provision": false, 00:28:14.047 "num_allocated_clusters": 38, 00:28:14.047 "snapshot": false, 00:28:14.047 "clone": false, 00:28:14.047 "esnap_clone": false 00:28:14.047 } 00:28:14.047 } 00:28:14.047 } 00:28:14.047 ] 00:28:14.047 07:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:28:14.047 07:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa9d4276-4628-488a-9d1f-7d6a2a742411 00:28:14.047 07:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:28:14.305 07:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:28:14.305 07:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa9d4276-4628-488a-9d1f-7d6a2a742411 00:28:14.305 07:30:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:28:14.564 07:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:28:14.564 07:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4ec61439-1b2e-407a-b53a-e6459865422d 00:28:14.822 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u aa9d4276-4628-488a-9d1f-7d6a2a742411 00:28:15.080 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:15.339 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:15.339 00:28:15.339 real 0m19.886s 00:28:15.339 user 0m36.790s 00:28:15.339 sys 0m4.787s 00:28:15.339 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:15.339 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:15.339 ************************************ 00:28:15.339 END TEST lvs_grow_dirty 00:28:15.339 ************************************ 00:28:15.339 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:28:15.339 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:28:15.339 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:28:15.339 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:28:15.339 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:28:15.339 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:28:15.339 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:28:15.339 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:28:15.339 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:28:15.339 nvmf_trace.0 00:28:15.339 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:28:15.339 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:28:15.339 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:15.339 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
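[Editor's note] The entries above show lvs_grow_dirty tearing its storage stack back down over JSON-RPC before the trace is archived. A minimal sketch of that teardown sequence, assuming a running SPDK target and rpc.py on PATH (the lvol/lvstore identifiers are the ones printed in the trace, used here purely as illustration):

    #!/usr/bin/env bash
    # Sketch of the lvs_grow_dirty teardown traced above (illustrative, not the test's body).
    set -euo pipefail

    RPC=./scripts/rpc.py
    LVOL=4ec61439-1b2e-407a-b53a-e6459865422d
    LVS_UUID=aa9d4276-4628-488a-9d1f-7d6a2a742411

    $RPC bdev_lvol_delete "$LVOL"                 # drop the logical volume first
    $RPC bdev_lvol_delete_lvstore -u "$LVS_UUID"  # then the lvstore built on aio_bdev
    $RPC bdev_aio_delete aio_bdev                 # then the backing AIO bdev
    rm -f ./test/nvmf/target/aio_bdev             # finally the file that backed it

The ordering matters: the lvstore cannot be deleted while a logical volume still lives on it, and the AIO bdev is removed last so the lvstore close completes cleanly.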
00:28:15.339 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:15.339 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:28:15.339 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:15.339 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:15.339 rmmod nvme_tcp 00:28:15.339 rmmod nvme_fabrics 00:28:15.598 rmmod nvme_keyring 00:28:15.598 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:15.598 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:28:15.598 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:28:15.598 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2641055 ']' 00:28:15.598 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2641055 00:28:15.598 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 2641055 ']' 00:28:15.598 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 2641055 00:28:15.598 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:28:15.598 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:15.598 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2641055 00:28:15.598 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:15.598 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:15.598 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2641055' 00:28:15.598 killing process with pid 2641055 00:28:15.598 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 2641055 00:28:15.598 07:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 2641055 00:28:15.857 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:15.857 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:15.857 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:15.857 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:28:15.857 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:28:15.857 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:15.857 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:28:15.857 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:15.857 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:15.857 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:15.857 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:15.857 07:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.764 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:17.764 00:28:17.764 real 0m43.635s 00:28:17.764 user 0m56.290s 00:28:17.764 sys 0m8.815s 00:28:17.764 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:17.764 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:17.764 ************************************ 00:28:17.764 END TEST nvmf_lvs_grow 00:28:17.764 ************************************ 00:28:17.765 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:28:17.765 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:28:17.765 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:17.765 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:17.765 ************************************ 00:28:17.765 START TEST nvmf_bdev_io_wait 00:28:17.765 ************************************ 00:28:17.765 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:28:17.765 * Looking for test storage... 
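[Editor's note] The block just above is the common nvmftestfini cleanup: stop the target, unload the kernel NVMe/TCP initiator modules, strip only the SPDK-tagged firewall rules, and remove the test namespace. A condensed, illustrative rendering of those steps (the pid variable and interface/namespace names are the ones from this run; the real helpers live in nvmf/common.sh and autotest_common.sh):

    # Illustrative cleanup in the spirit of nvmftestfini (not the helper's actual body).
    kill "$nvmfpid" && wait "$nvmfpid" || true            # stop the nvmf_tgt reactor
    modprobe -v -r nvme-tcp || true                        # also pulls nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-fabrics || true
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK-commented rules
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true    # remove the target-side namespace
    ip -4 addr flush cvl_0_1                               # and the initiator-side test address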
00:28:17.765 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:17.765 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:17.765 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:28:17.765 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:18.024 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:18.024 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:18.024 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:18.024 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:18.024 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:28:18.024 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:28:18.024 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:28:18.024 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:28:18.024 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:28:18.024 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:28:18.024 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:28:18.024 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:18.024 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:28:18.024 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:28:18.024 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:18.024 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:18.024 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:28:18.024 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:28:18.024 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:18.024 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:28:18.024 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:28:18.024 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:28:18.024 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:28:18.024 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:18.024 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:28:18.024 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:28:18.024 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:18.024 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:18.024 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:28:18.024 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:18.024 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:18.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.024 --rc genhtml_branch_coverage=1 00:28:18.024 --rc genhtml_function_coverage=1 00:28:18.025 --rc genhtml_legend=1 00:28:18.025 --rc geninfo_all_blocks=1 00:28:18.025 --rc geninfo_unexecuted_blocks=1 00:28:18.025 00:28:18.025 ' 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:18.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.025 --rc genhtml_branch_coverage=1 00:28:18.025 --rc genhtml_function_coverage=1 00:28:18.025 --rc genhtml_legend=1 00:28:18.025 --rc geninfo_all_blocks=1 00:28:18.025 --rc geninfo_unexecuted_blocks=1 00:28:18.025 00:28:18.025 ' 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:18.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.025 --rc genhtml_branch_coverage=1 00:28:18.025 --rc genhtml_function_coverage=1 00:28:18.025 --rc genhtml_legend=1 00:28:18.025 --rc geninfo_all_blocks=1 00:28:18.025 --rc geninfo_unexecuted_blocks=1 00:28:18.025 00:28:18.025 ' 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:18.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.025 --rc genhtml_branch_coverage=1 00:28:18.025 --rc genhtml_function_coverage=1 00:28:18.025 --rc genhtml_legend=1 00:28:18.025 --rc geninfo_all_blocks=1 00:28:18.025 --rc 
geninfo_unexecuted_blocks=1 00:28:18.025 00:28:18.025 ' 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:28:18.025 07:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
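[Editor's note] The lcov gate traced a few entries back is just a field-by-field version comparison ("is 1.15 < 2"). A minimal standalone sketch of the same idea, assuming dotted numeric versions (this is not the actual scripts/common.sh cmp_versions helper):

    # Illustrative version comparison: returns 0 if $1 < $2.
    version_lt() {
        local -a a b
        local i x y
        IFS='.-' read -ra a <<< "$1"
        IFS='.-' read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0}; y=${b[i]:-0}
            ((x < y)) && return 0    # first differing field decides
            ((x > y)) && return 1
        done
        return 1                     # equal is not "less than"
    }

    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"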
00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
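[Editor's note] The array setup above classifies supported NICs purely by PCI vendor/device ID; 0x8086:0x159b is the Intel E810 family used on this rig. A hedged sketch of the same discovery with lspci (simplified; the real logic is gather_supported_nvmf_pci_devs in nvmf/common.sh):

    # Illustrative: list E810-class NICs (vendor 0x8086, device 0x159b) and the
    # kernel net devices sitting behind them.
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $dev ]] && echo "Found net device under $pci: ${dev##*/}"
        done
    done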
00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:20.563 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:20.563 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:20.563 Found net devices under 0000:09:00.0: cvl_0_0 00:28:20.563 
07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:20.563 Found net devices under 0000:09:00.1: cvl_0_1 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:20.563 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:20.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:20.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:28:20.564 00:28:20.564 --- 10.0.0.2 ping statistics --- 00:28:20.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.564 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:20.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:20.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:28:20.564 00:28:20.564 --- 10.0.0.1 ping statistics --- 00:28:20.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.564 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2643583 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2643583 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 2643583 ']' 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:20.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
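[Editor's note] The network bring-up traced above moves the target-side port into its own namespace so initiator and target can talk over a real TCP path on a single host, then verifies reachability in both directions with ping. A condensed sketch of that wiring, using the interface and namespace names from this run (the actual helper is nvmf_tcp_init in nvmf/common.sh):

    # Condensed sketch of nvmf_tcp_init (illustrative).
    TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"                 # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"             # initiator side stays in the default ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
             -m comment --comment SPDK_NVMF           # let NVMe/TCP through, tagged for later cleanup
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1            # target -> initiator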
00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:20.564 [2024-11-20 07:30:23.624757] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:20.564 [2024-11-20 07:30:23.625875] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:28:20.564 [2024-11-20 07:30:23.625943] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:20.564 [2024-11-20 07:30:23.703029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:20.564 [2024-11-20 07:30:23.765152] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:20.564 [2024-11-20 07:30:23.765203] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:20.564 [2024-11-20 07:30:23.765231] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:20.564 [2024-11-20 07:30:23.765243] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:20.564 [2024-11-20 07:30:23.765252] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:20.564 [2024-11-20 07:30:23.766821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:20.564 [2024-11-20 07:30:23.766880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:20.564 [2024-11-20 07:30:23.766948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:20.564 [2024-11-20 07:30:23.766951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.564 [2024-11-20 07:30:23.767459] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
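[Editor's note] nvmfappstart, traced just above, launches nvmf_tgt inside the target namespace with --interrupt-mode and --wait-for-rpc, then blocks until the RPC socket answers. Roughly (a sketch of the nvmfappstart/waitforlisten pattern, with the paths used in this run):

    # Illustrative start-and-wait loop; not the helpers' exact bodies.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
    nvmfpid=$!

    # Poll the RPC socket until the app accepts configuration calls.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
        sleep 0.5
    done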
00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:20.564 [2024-11-20 07:30:23.956403] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:20.564 [2024-11-20 07:30:23.956420] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:20.564 [2024-11-20 07:30:23.957178] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:20.564 [2024-11-20 07:30:23.958000] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
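[Editor's note] Because the target was started with --wait-for-rpc, bdev options can still be changed before the framework initializes; the two rpc_cmd calls above do exactly that, and the transport/subsystem setup follows in the next entries of the log. A sketch of the same pre-start configuration (socket path as in this run; the tiny pool/cache values are what make the bdev_io_wait paths fire):

    # Illustrative pre-start configuration matching the rpc_cmd calls traced above.
    RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"

    $RPC bdev_set_options -p 5 -c 1                # deliberately small bdev I/O pool and cache
    $RPC framework_start_init                      # subsystems init, poll groups are created
    $RPC nvmf_create_transport -t tcp -o -u 8192   # TCP transport, flags exactly as passed by bdev_io_wait.sh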
00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:20.564 [2024-11-20 07:30:23.963691] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.564 07:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:20.823 Malloc0 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:20.823 [2024-11-20 07:30:24.023845] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2643605 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2643607 00:28:20.823 07:30:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:20.823 { 00:28:20.823 "params": { 00:28:20.823 "name": "Nvme$subsystem", 00:28:20.823 "trtype": "$TEST_TRANSPORT", 00:28:20.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.823 "adrfam": "ipv4", 00:28:20.823 "trsvcid": "$NVMF_PORT", 00:28:20.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.823 "hdgst": ${hdgst:-false}, 00:28:20.823 "ddgst": ${ddgst:-false} 00:28:20.823 }, 00:28:20.823 "method": "bdev_nvme_attach_controller" 00:28:20.823 } 00:28:20.823 EOF 00:28:20.823 )") 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2643609 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:20.823 { 00:28:20.823 "params": { 00:28:20.823 "name": "Nvme$subsystem", 00:28:20.823 "trtype": "$TEST_TRANSPORT", 00:28:20.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.823 "adrfam": "ipv4", 00:28:20.823 "trsvcid": "$NVMF_PORT", 00:28:20.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.823 "hdgst": ${hdgst:-false}, 00:28:20.823 "ddgst": ${ddgst:-false} 00:28:20.823 }, 00:28:20.823 "method": "bdev_nvme_attach_controller" 00:28:20.823 } 00:28:20.823 EOF 00:28:20.823 )") 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2643612 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 
1 -s 256 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:20.823 { 00:28:20.823 "params": { 00:28:20.823 "name": "Nvme$subsystem", 00:28:20.823 "trtype": "$TEST_TRANSPORT", 00:28:20.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.823 "adrfam": "ipv4", 00:28:20.823 "trsvcid": "$NVMF_PORT", 00:28:20.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.823 "hdgst": ${hdgst:-false}, 00:28:20.823 "ddgst": ${ddgst:-false} 00:28:20.823 }, 00:28:20.823 "method": "bdev_nvme_attach_controller" 00:28:20.823 } 00:28:20.823 EOF 00:28:20.823 )") 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:20.823 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:20.823 { 00:28:20.823 "params": { 00:28:20.823 "name": "Nvme$subsystem", 00:28:20.823 "trtype": "$TEST_TRANSPORT", 00:28:20.824 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.824 "adrfam": "ipv4", 00:28:20.824 "trsvcid": "$NVMF_PORT", 00:28:20.824 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.824 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.824 "hdgst": ${hdgst:-false}, 00:28:20.824 "ddgst": ${ddgst:-false} 00:28:20.824 }, 00:28:20.824 "method": "bdev_nvme_attach_controller" 00:28:20.824 } 00:28:20.824 EOF 00:28:20.824 )") 00:28:20.824 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:28:20.824 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2643605 00:28:20.824 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:28:20.824 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:28:20.824 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
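The four bdevperf command lines above run one workload each (write, read, flush, unmap) against the same subsystem, each on its own core mask and shm id; /dev/fd/63 in the logged commands is the process substitution that feeds each instance the JSON printed by gen_nvmf_target_json. A condensed sketch of that launch-and-reap pattern, with the flags taken from this run:

    bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    "$bdevperf" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
    "$bdevperf" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
    "$bdevperf" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
    "$bdevperf" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
    # the "wait 2643605" etc. seen later in the log reap these pids one at a time
    wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"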
00:28:20.824 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:28:20.824 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:28:20.824 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:20.824 "params": { 00:28:20.824 "name": "Nvme1", 00:28:20.824 "trtype": "tcp", 00:28:20.824 "traddr": "10.0.0.2", 00:28:20.824 "adrfam": "ipv4", 00:28:20.824 "trsvcid": "4420", 00:28:20.824 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:20.824 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:20.824 "hdgst": false, 00:28:20.824 "ddgst": false 00:28:20.824 }, 00:28:20.824 "method": "bdev_nvme_attach_controller" 00:28:20.824 }' 00:28:20.824 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:28:20.824 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:28:20.824 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:20.824 "params": { 00:28:20.824 "name": "Nvme1", 00:28:20.824 "trtype": "tcp", 00:28:20.824 "traddr": "10.0.0.2", 00:28:20.824 "adrfam": "ipv4", 00:28:20.824 "trsvcid": "4420", 00:28:20.824 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:20.824 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:20.824 "hdgst": false, 00:28:20.824 "ddgst": false 00:28:20.824 }, 00:28:20.824 "method": "bdev_nvme_attach_controller" 00:28:20.824 }' 00:28:20.824 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:28:20.824 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:20.824 "params": { 00:28:20.824 "name": "Nvme1", 00:28:20.824 "trtype": "tcp", 00:28:20.824 "traddr": "10.0.0.2", 00:28:20.824 "adrfam": "ipv4", 00:28:20.824 "trsvcid": "4420", 00:28:20.824 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:20.824 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:20.824 "hdgst": false, 00:28:20.824 "ddgst": false 00:28:20.824 }, 00:28:20.824 "method": "bdev_nvme_attach_controller" 00:28:20.824 }' 00:28:20.824 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:28:20.824 07:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:20.824 "params": { 00:28:20.824 "name": "Nvme1", 00:28:20.824 "trtype": "tcp", 00:28:20.824 "traddr": "10.0.0.2", 00:28:20.824 "adrfam": "ipv4", 00:28:20.824 "trsvcid": "4420", 00:28:20.824 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:20.824 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:20.824 "hdgst": false, 00:28:20.824 "ddgst": false 00:28:20.824 }, 00:28:20.824 "method": "bdev_nvme_attach_controller" 00:28:20.824 }' 00:28:20.824 [2024-11-20 07:30:24.074903] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:28:20.824 [2024-11-20 07:30:24.074904] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:28:20.824 [2024-11-20 07:30:24.074903] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:28:20.824 [2024-11-20 07:30:24.074904] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
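Each printed blob above is the bdev_nvme_attach_controller call that gen_nvmf_target_json emits for Nvme1 after jq substitutes the environment values. As a rough sketch (the "subsystems"/"bdev"/"config" wrapper is assumed from SPDK's JSON config format, not shown in this log; the params are copied from the run), the config each bdevperf reads on /dev/fd/63 would look approximately like:

    gen_nvmf_target_json() {   # hand-written stand-in for the helper used above
      cat <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [ {
        "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode1",
                    "hostnqn": "nqn.2016-06.io.spdk:host1",
                    "hdgst": false, "ddgst": false } } ] } ] }
    EOF
    }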
00:28:20.824 [2024-11-20 07:30:24.074989] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:28:20.824 [2024-11-20 07:30:24.074990] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:28:20.824 [2024-11-20 07:30:24.074990] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:28:20.824 [2024-11-20 07:30:24.074991] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:28:21.082 [2024-11-20 07:30:24.260516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.082 [2024-11-20 07:30:24.317099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:28:21.082 [2024-11-20 07:30:24.366950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.082 [2024-11-20 07:30:24.422932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:21.082 [2024-11-20 07:30:24.469036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.340 [2024-11-20 07:30:24.526465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:28:21.340 [2024-11-20 07:30:24.575784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.340 [2024-11-20 07:30:24.630527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:28:21.340 Running I/O for 1 seconds... 00:28:21.340 Running I/O for 1 seconds... 00:28:21.598 Running I/O for 1 seconds... 00:28:21.598 Running I/O for 1 seconds...
00:28:22.532 8315.00 IOPS, 32.48 MiB/s [2024-11-20T06:30:25.965Z] 8345.00 IOPS, 32.60 MiB/s 00:28:22.532 Latency(us) 00:28:22.532 [2024-11-20T06:30:25.965Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:22.532 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:28:22.532 Nvme1n1 : 1.01 8383.99 32.75 0.00 0.00 15203.32 5218.61 17185.00 00:28:22.532 [2024-11-20T06:30:25.965Z] =================================================================================================================== 00:28:22.532 [2024-11-20T06:30:25.965Z] Total : 8383.99 32.75 0.00 0.00 15203.32 5218.61 17185.00 00:28:22.532 00:28:22.532 Latency(us) 00:28:22.532 [2024-11-20T06:30:25.965Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:22.532 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:28:22.532 Nvme1n1 : 1.01 8401.22 32.82 0.00 0.00 15164.12 4781.70 18932.62 00:28:22.532 [2024-11-20T06:30:25.965Z] =================================================================================================================== 00:28:22.532 [2024-11-20T06:30:25.965Z] Total : 8401.22 32.82 0.00 0.00 15164.12 4781.70 18932.62 00:28:22.532 10048.00 IOPS, 39.25 MiB/s 00:28:22.532 Latency(us) 00:28:22.532 [2024-11-20T06:30:25.965Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:22.532 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:28:22.532 Nvme1n1 : 1.01 10126.58 39.56 0.00 0.00 12598.50 2645.71 18641.35 00:28:22.532 [2024-11-20T06:30:25.965Z] =================================================================================================================== 00:28:22.532 [2024-11-20T06:30:25.965Z] Total : 10126.58 39.56 0.00 0.00 12598.50 2645.71 18641.35 00:28:22.532 07:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2643607 00:28:22.532 181472.00 IOPS, 708.88 MiB/s 00:28:22.532 Latency(us) 00:28:22.532 [2024-11-20T06:30:25.965Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:22.532 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:28:22.532 Nvme1n1 : 1.00 181137.83 707.57 0.00 0.00 702.92 285.20 1844.72 00:28:22.532 [2024-11-20T06:30:25.965Z] =================================================================================================================== 00:28:22.532 [2024-11-20T06:30:25.965Z] Total : 181137.83 707.57 0.00 0.00 702.92 285.20 1844.72 00:28:22.532 07:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2643609 00:28:22.790 07:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2643612 00:28:22.790 07:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:22.790 07:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.790 07:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:22.790 07:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.790 07:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:28:22.790 07:30:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:28:22.790 07:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:22.790 07:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:28:22.790 07:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:22.790 07:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:28:22.790 07:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:22.790 07:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:22.790 rmmod nvme_tcp 00:28:22.790 rmmod nvme_fabrics 00:28:22.790 rmmod nvme_keyring 00:28:22.790 07:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:22.790 07:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:28:22.790 07:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:28:22.790 07:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2643583 ']' 00:28:22.790 07:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2643583 00:28:22.790 07:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 2643583 ']' 00:28:22.790 07:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 2643583 00:28:22.790 07:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:28:22.790 07:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:22.790 07:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2643583 00:28:22.790 07:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:22.790 07:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:22.790 07:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2643583' 00:28:22.790 killing process with pid 2643583 00:28:22.790 07:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 2643583 00:28:22.790 07:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 2643583 00:28:23.049 07:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:23.049 07:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:23.049 07:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:23.049 07:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:28:23.049 07:30:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:28:23.049 07:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:23.049 07:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:28:23.049 07:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:23.049 07:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:23.049 07:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.049 07:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:23.049 07:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.587 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:25.587 00:28:25.587 real 0m7.324s 00:28:25.587 user 0m14.624s 00:28:25.587 sys 0m4.065s 00:28:25.587 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:25.587 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:25.587 ************************************ 00:28:25.587 END TEST nvmf_bdev_io_wait 00:28:25.587 ************************************ 00:28:25.587 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:28:25.587 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:28:25.587 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:25.587 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:25.587 ************************************ 00:28:25.587 START TEST nvmf_queue_depth 00:28:25.587 ************************************ 00:28:25.587 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:28:25.587 * Looking for test storage... 
00:28:25.587 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:25.587 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:25.587 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:28:25.587 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:25.587 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:25.587 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:25.587 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:25.587 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:25.587 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:28:25.587 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:28:25.587 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:28:25.587 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:28:25.587 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:28:25.587 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:28:25.587 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:28:25.587 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:25.587 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:28:25.587 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:28:25.587 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:25.587 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:25.587 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:28:25.587 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:28:25.587 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:25.587 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:28:25.587 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:28:25.587 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:28:25.587 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:28:25.587 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:25.587 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:28:25.587 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:25.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.588 --rc genhtml_branch_coverage=1 00:28:25.588 --rc genhtml_function_coverage=1 00:28:25.588 --rc genhtml_legend=1 00:28:25.588 --rc geninfo_all_blocks=1 00:28:25.588 --rc geninfo_unexecuted_blocks=1 00:28:25.588 00:28:25.588 ' 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:25.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.588 --rc genhtml_branch_coverage=1 00:28:25.588 --rc genhtml_function_coverage=1 00:28:25.588 --rc genhtml_legend=1 00:28:25.588 --rc geninfo_all_blocks=1 00:28:25.588 --rc geninfo_unexecuted_blocks=1 00:28:25.588 00:28:25.588 ' 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:25.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.588 --rc genhtml_branch_coverage=1 00:28:25.588 --rc genhtml_function_coverage=1 00:28:25.588 --rc genhtml_legend=1 00:28:25.588 --rc geninfo_all_blocks=1 00:28:25.588 --rc geninfo_unexecuted_blocks=1 00:28:25.588 00:28:25.588 ' 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:25.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.588 --rc genhtml_branch_coverage=1 00:28:25.588 --rc genhtml_function_coverage=1 00:28:25.588 --rc genhtml_legend=1 00:28:25.588 --rc geninfo_all_blocks=1 00:28:25.588 --rc 
geninfo_unexecuted_blocks=1 00:28:25.588 00:28:25.588 ' 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:28:25.588 07:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:27.493 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:27.493 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:28:27.493 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:27.493 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:27.493 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:27.493 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:28:27.493 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:27.493 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:28:27.493 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:27.493 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:28:27.493 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:28:27.493 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:28:27.493 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:28:27.493 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:28:27.493 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:28:27.493 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:27.493 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:27.493 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:27.493 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:27.493 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:27.493 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:27.493 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:27.493 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:27.493 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:27.493 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:27.493 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:27.493 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:27.493 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:27.493 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:27.494 07:30:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:27.494 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:27.494 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 
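The "Found net devices under ..." messages come from a sysfs walk over the detected e810 functions. A minimal standalone version of that lookup, using the standard kernel sysfs layout and the PCI address from this machine:

    pci=0000:09:00.0
    pci_net_devs=(/sys/bus/pci/devices/$pci/net/*)   # the kernel exposes the bound netdev(s) here
    pci_net_devs=("${pci_net_devs[@]##*/}")          # strip the path, keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"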
00:28:27.494 Found net devices under 0000:09:00.0: cvl_0_0 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:27.494 Found net devices under 0000:09:00.1: cvl_0_1 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:27.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:27.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:28:27.494 00:28:27.494 --- 10.0.0.2 ping statistics --- 00:28:27.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.494 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:28:27.494 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:27.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:27.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:28:27.494 00:28:27.495 --- 10.0.0.1 ping statistics --- 00:28:27.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.495 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:28:27.495 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:27.495 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:28:27.495 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:27.495 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:27.495 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:27.495 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:27.495 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:27.495 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:27.495 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:27.495 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:28:27.495 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:27.495 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:27.495 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:27.495 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2645834 00:28:27.495 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:28:27.495 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2645834 00:28:27.495 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 2645834 ']' 00:28:27.495 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:27.495 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:27.495 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:27.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
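The nvmf_tcp_init steps just above isolate the target port in its own network namespace so the initiator reaches it over TCP on the E810 pair: cvl_0_0 moves into cvl_0_0_ns_spdk with 10.0.0.2, the initiator keeps cvl_0_1 with 10.0.0.1, one iptables rule admits port 4420, and a ping in each direction verifies the path. Collected from the commands in this log, the plumbing is:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                  # initiator-side reachability check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target-side reachability check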
00:28:27.495 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:27.495 07:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:27.495 [2024-11-20 07:30:30.915616] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:27.495 [2024-11-20 07:30:30.916683] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:28:27.495 [2024-11-20 07:30:30.916749] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:27.753 [2024-11-20 07:30:30.993274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.753 [2024-11-20 07:30:31.053025] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:27.753 [2024-11-20 07:30:31.053074] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:27.753 [2024-11-20 07:30:31.053103] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:27.753 [2024-11-20 07:30:31.053114] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:27.753 [2024-11-20 07:30:31.053124] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:27.753 [2024-11-20 07:30:31.053741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:27.753 [2024-11-20 07:30:31.143901] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:27.753 [2024-11-20 07:30:31.144196] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
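The app_setup_trace notices above report the 0xFFFF tracepoint mask and suggest how to inspect it. Following the log's own hint, a snapshot could be pulled while the target runs (shm id 0 matches the "-i 0" the target was started with; the spdk_trace path is assumed from the standard SPDK build layout):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$spdk/build/bin/spdk_trace" -s nvmf -i 0 > /tmp/nvmf_trace.txt   # parse the live shared-memory trace
    cp /dev/shm/nvmf_trace.0 /tmp/                                    # or keep the raw buffer for offline analysis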
00:28:27.753 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:27.753 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:28:27.753 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:27.753 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:27.753 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:28.011 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:28.011 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:28.011 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.011 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:28.011 [2024-11-20 07:30:31.194400] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:28.011 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.011 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:28.011 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.011 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:28.011 Malloc0 00:28:28.011 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.011 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:28.011 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.011 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:28.011 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.011 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:28.011 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.011 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:28.011 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.011 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:28.011 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
00:28:28.011 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:28.011 [2024-11-20 07:30:31.250457] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:28.011 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.011 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2645969 00:28:28.011 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:28:28.011 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:28.011 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2645969 /var/tmp/bdevperf.sock 00:28:28.011 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 2645969 ']' 00:28:28.011 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:28.011 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:28.011 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:28.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:28.011 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:28.011 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:28.011 [2024-11-20 07:30:31.297522] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
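Condensed, the rpc_cmd calls above provision the target for the queue-depth run, and bdevperf is then started as the initiator with a 1024-deep queue. A sketch of the same sequence as direct rpc.py/bdevperf invocations (paths relative to the SPDK tree, assuming the default /var/tmp/spdk.sock RPC socket used here):

    # Target side: transport, backing bdev, subsystem, namespace, listener
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Initiator side: bdevperf, 1024 outstanding 4 KiB I/Os, verify workload, 10 s
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

The 10-second run below attaches NVMe0 over TCP to that subsystem and reports roughly 8.3k IOPS with no failed or timed-out I/O at this queue depth.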
00:28:28.011 [2024-11-20 07:30:31.297598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2645969 ] 00:28:28.011 [2024-11-20 07:30:31.364106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.011 [2024-11-20 07:30:31.422070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.269 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:28.269 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:28:28.270 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:28.270 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.270 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:28.270 NVMe0n1 00:28:28.270 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.270 07:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:28.528 Running I/O for 10 seconds... 00:28:30.397 8182.00 IOPS, 31.96 MiB/s [2024-11-20T06:30:34.767Z] 8182.50 IOPS, 31.96 MiB/s [2024-11-20T06:30:36.140Z] 8192.33 IOPS, 32.00 MiB/s [2024-11-20T06:30:37.075Z] 8192.25 IOPS, 32.00 MiB/s [2024-11-20T06:30:38.010Z] 8203.00 IOPS, 32.04 MiB/s [2024-11-20T06:30:38.945Z] 8261.17 IOPS, 32.27 MiB/s [2024-11-20T06:30:39.878Z] 8294.86 IOPS, 32.40 MiB/s [2024-11-20T06:30:40.814Z] 8282.75 IOPS, 32.35 MiB/s [2024-11-20T06:30:41.749Z] 8269.78 IOPS, 32.30 MiB/s [2024-11-20T06:30:42.007Z] 8290.90 IOPS, 32.39 MiB/s 00:28:38.574 Latency(us) 00:28:38.574 [2024-11-20T06:30:42.007Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:38.574 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:28:38.574 Verification LBA range: start 0x0 length 0x4000 00:28:38.574 NVMe0n1 : 10.10 8305.01 32.44 0.00 0.00 122778.60 21262.79 72235.24 00:28:38.574 [2024-11-20T06:30:42.007Z] =================================================================================================================== 00:28:38.574 [2024-11-20T06:30:42.007Z] Total : 8305.01 32.44 0.00 0.00 122778.60 21262.79 72235.24 00:28:38.574 { 00:28:38.574 "results": [ 00:28:38.574 { 00:28:38.574 "job": "NVMe0n1", 00:28:38.574 "core_mask": "0x1", 00:28:38.574 "workload": "verify", 00:28:38.574 "status": "finished", 00:28:38.574 "verify_range": { 00:28:38.574 "start": 0, 00:28:38.574 "length": 16384 00:28:38.574 }, 00:28:38.574 "queue_depth": 1024, 00:28:38.574 "io_size": 4096, 00:28:38.574 "runtime": 10.102215, 00:28:38.574 "iops": 8305.01033684197, 00:28:38.574 "mibps": 32.44144662828894, 00:28:38.574 "io_failed": 0, 00:28:38.574 "io_timeout": 0, 00:28:38.574 "avg_latency_us": 122778.5962025416, 00:28:38.574 "min_latency_us": 21262.79111111111, 00:28:38.574 "max_latency_us": 72235.23555555556 00:28:38.574 } 00:28:38.574 ], 
00:28:38.574 "core_count": 1 00:28:38.574 } 00:28:38.574 07:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2645969 00:28:38.574 07:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 2645969 ']' 00:28:38.574 07:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 2645969 00:28:38.574 07:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:28:38.574 07:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:38.574 07:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2645969 00:28:38.574 07:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:38.574 07:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:38.574 07:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2645969' 00:28:38.574 killing process with pid 2645969 00:28:38.574 07:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 2645969 00:28:38.574 Received shutdown signal, test time was about 10.000000 seconds 00:28:38.574 00:28:38.574 Latency(us) 00:28:38.574 [2024-11-20T06:30:42.007Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:38.574 [2024-11-20T06:30:42.007Z] =================================================================================================================== 00:28:38.574 [2024-11-20T06:30:42.007Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:38.574 07:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 2645969 00:28:38.832 07:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:28:38.832 07:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:28:38.832 07:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:38.832 07:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:28:38.832 07:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:38.832 07:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:28:38.832 07:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:38.832 07:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:38.832 rmmod nvme_tcp 00:28:38.832 rmmod nvme_fabrics 00:28:38.832 rmmod nvme_keyring 00:28:38.832 07:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:38.832 07:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:28:38.832 07:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:28:38.832 07:30:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2645834 ']' 00:28:38.832 07:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2645834 00:28:38.832 07:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 2645834 ']' 00:28:38.832 07:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 2645834 00:28:38.832 07:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:28:38.832 07:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:38.832 07:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2645834 00:28:38.832 07:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:38.832 07:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:38.832 07:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2645834' 00:28:38.832 killing process with pid 2645834 00:28:38.832 07:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 2645834 00:28:38.832 07:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 2645834 00:28:39.091 07:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:39.091 07:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:39.091 07:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:39.091 07:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:28:39.091 07:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:28:39.091 07:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:39.091 07:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:28:39.091 07:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:39.091 07:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:39.091 07:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.091 07:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:39.091 07:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:41.669 00:28:41.669 real 0m16.020s 00:28:41.669 user 0m21.199s 00:28:41.669 sys 0m3.770s 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:41.669 ************************************ 00:28:41.669 END TEST nvmf_queue_depth 00:28:41.669 ************************************ 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:41.669 ************************************ 00:28:41.669 START TEST nvmf_target_multipath 00:28:41.669 ************************************ 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:28:41.669 * Looking for test storage... 00:28:41.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:28:41.669 07:30:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:41.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.669 --rc genhtml_branch_coverage=1 00:28:41.669 --rc genhtml_function_coverage=1 00:28:41.669 --rc genhtml_legend=1 00:28:41.669 --rc geninfo_all_blocks=1 00:28:41.669 --rc geninfo_unexecuted_blocks=1 00:28:41.669 00:28:41.669 ' 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:41.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.669 --rc genhtml_branch_coverage=1 00:28:41.669 --rc genhtml_function_coverage=1 00:28:41.669 --rc genhtml_legend=1 00:28:41.669 --rc geninfo_all_blocks=1 00:28:41.669 --rc geninfo_unexecuted_blocks=1 00:28:41.669 00:28:41.669 ' 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:41.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.669 --rc genhtml_branch_coverage=1 00:28:41.669 --rc genhtml_function_coverage=1 00:28:41.669 --rc genhtml_legend=1 00:28:41.669 --rc geninfo_all_blocks=1 00:28:41.669 --rc 
geninfo_unexecuted_blocks=1 00:28:41.669 00:28:41.669 ' 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:41.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.669 --rc genhtml_branch_coverage=1 00:28:41.669 --rc genhtml_function_coverage=1 00:28:41.669 --rc genhtml_legend=1 00:28:41.669 --rc geninfo_all_blocks=1 00:28:41.669 --rc geninfo_unexecuted_blocks=1 00:28:41.669 00:28:41.669 ' 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:41.669 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:41.670 07:30:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:28:41.670 07:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
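multipath.sh sources the same nvmf/common.sh harness, so nvmftestinit rebuilds the namespace-based TCP topology before checking for extra NICs. A rough sketch of the steps the harness logs below (the cvl_0_0/cvl_0_1 interface names, the namespace name and the 10.0.0.0/24 addresses are all taken from this log):

    # Target NIC lives in its own namespace; initiator NIC stays in the host
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # host -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> host

With only this single path available (NVMF_SECOND_TARGET_IP and NVMF_SECOND_INITIATOR_IP stay empty), multipath.sh prints 'only one NIC for nvmf test' and exits 0 further down, so the multipath test is effectively skipped on this rig.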
00:28:43.596 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:43.596 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:28:43.596 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:43.596 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:43.597 07:30:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:43.597 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:43.597 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:43.597 07:30:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:43.597 Found net devices under 0000:09:00.0: cvl_0_0 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:43.597 Found net devices under 0000:09:00.1: cvl_0_1 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:43.597 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:43.598 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:43.598 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:43.598 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:43.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:43.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:28:43.598 00:28:43.598 --- 10.0.0.2 ping statistics --- 00:28:43.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:43.598 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:28:43.598 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:43.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:43.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:28:43.598 00:28:43.598 --- 10.0.0.1 ping statistics --- 00:28:43.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:43.598 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:28:43.598 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:43.598 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:28:43.598 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:43.598 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:43.598 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:43.598 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:43.598 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:43.598 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:43.598 07:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:43.598 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:28:43.598 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:28:43.598 only one NIC for nvmf test 00:28:43.598 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:28:43.598 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:43.598 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:28:43.598 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:43.598 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:28:43.598 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:43.598 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:43.598 rmmod nvme_tcp 00:28:43.856 rmmod nvme_fabrics 00:28:43.856 rmmod nvme_keyring 00:28:43.856 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:43.856 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:28:43.856 07:30:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:28:43.856 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:28:43.856 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:43.856 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:43.856 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:43.856 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:28:43.856 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:28:43.856 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:43.856 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:28:43.856 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:43.856 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:43.856 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:43.856 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:43.856 07:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:45.759 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:45.759 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:28:45.759 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:28:45.759 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:45.759 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:28:45.759 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:45.759 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:28:45.759 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:45.759 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:45.759 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:45.759 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:28:45.759 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:28:45.759 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:28:45.759 07:30:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:45.759 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:45.759 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:45.759 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:28:45.759 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:28:45.759 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:45.759 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:28:45.759 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:45.759 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:45.759 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:45.759 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:45.759 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:45.759 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:45.759 00:28:45.759 real 0m4.558s 00:28:45.759 user 0m0.908s 00:28:45.759 sys 0m1.663s 00:28:45.759 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:45.759 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:28:45.759 ************************************ 00:28:45.759 END TEST nvmf_target_multipath 00:28:45.759 ************************************ 00:28:45.759 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:28:45.759 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:28:45.759 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:45.759 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:45.759 ************************************ 00:28:45.759 START TEST nvmf_zcopy 00:28:45.759 ************************************ 00:28:45.759 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:28:46.018 * Looking for test storage... 
00:28:46.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:46.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.018 --rc genhtml_branch_coverage=1 00:28:46.018 --rc genhtml_function_coverage=1 00:28:46.018 --rc genhtml_legend=1 00:28:46.018 --rc geninfo_all_blocks=1 00:28:46.018 --rc geninfo_unexecuted_blocks=1 00:28:46.018 00:28:46.018 ' 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:46.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.018 --rc genhtml_branch_coverage=1 00:28:46.018 --rc genhtml_function_coverage=1 00:28:46.018 --rc genhtml_legend=1 00:28:46.018 --rc geninfo_all_blocks=1 00:28:46.018 --rc geninfo_unexecuted_blocks=1 00:28:46.018 00:28:46.018 ' 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:46.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.018 --rc genhtml_branch_coverage=1 00:28:46.018 --rc genhtml_function_coverage=1 00:28:46.018 --rc genhtml_legend=1 00:28:46.018 --rc geninfo_all_blocks=1 00:28:46.018 --rc geninfo_unexecuted_blocks=1 00:28:46.018 00:28:46.018 ' 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:46.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.018 --rc genhtml_branch_coverage=1 00:28:46.018 --rc genhtml_function_coverage=1 00:28:46.018 --rc genhtml_legend=1 00:28:46.018 --rc geninfo_all_blocks=1 00:28:46.018 --rc geninfo_unexecuted_blocks=1 00:28:46.018 00:28:46.018 ' 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:46.018 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:46.019 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.019 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.019 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.019 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:28:46.019 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.019 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:28:46.019 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:46.019 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:46.019 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:46.019 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:46.019 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:46.019 07:30:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:46.019 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:46.019 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:46.019 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:46.019 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:46.019 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:28:46.019 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:46.019 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:46.019 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:46.019 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:46.019 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:46.019 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:46.019 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:46.019 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:46.019 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:46.019 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:46.019 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:28:46.019 07:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:28:48.552 07:30:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:48.552 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:48.552 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:48.552 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:48.553 Found net devices under 0000:09:00.0: cvl_0_0 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:48.553 Found net devices under 0000:09:00.1: cvl_0_1 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:48.553 07:30:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:48.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:48.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:28:48.553 00:28:48.553 --- 10.0.0.2 ping statistics --- 00:28:48.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.553 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:48.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:48.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:28:48.553 00:28:48.553 --- 10.0.0.1 ping statistics --- 00:28:48.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.553 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2651036 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2651036 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 2651036 ']' 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:48.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:48.553 [2024-11-20 07:30:51.617839] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:48.553 [2024-11-20 07:30:51.618949] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:28:48.553 [2024-11-20 07:30:51.619012] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:48.553 [2024-11-20 07:30:51.694366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.553 [2024-11-20 07:30:51.754828] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:48.553 [2024-11-20 07:30:51.754886] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:48.553 [2024-11-20 07:30:51.754899] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:48.553 [2024-11-20 07:30:51.754910] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:48.553 [2024-11-20 07:30:51.754920] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:48.553 [2024-11-20 07:30:51.755469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:48.553 [2024-11-20 07:30:51.841184] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:48.553 [2024-11-20 07:30:51.841513] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:48.553 [2024-11-20 07:30:51.892034] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:48.553 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.554 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:48.554 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.554 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:48.554 [2024-11-20 07:30:51.908191] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:48.554 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.554 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:48.554 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.554 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:48.554 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.554 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:28:48.554 07:30:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.554 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:48.554 malloc0 00:28:48.554 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.554 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:28:48.554 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.554 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:48.554 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.554 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:28:48.554 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:28:48.554 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:28:48.554 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:28:48.554 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:48.554 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:48.554 { 00:28:48.554 "params": { 00:28:48.554 "name": "Nvme$subsystem", 00:28:48.554 "trtype": "$TEST_TRANSPORT", 00:28:48.554 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:48.554 "adrfam": "ipv4", 00:28:48.554 "trsvcid": "$NVMF_PORT", 00:28:48.554 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:48.554 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:48.554 "hdgst": ${hdgst:-false}, 00:28:48.554 "ddgst": ${ddgst:-false} 00:28:48.554 }, 00:28:48.554 "method": "bdev_nvme_attach_controller" 00:28:48.554 } 00:28:48.554 EOF 00:28:48.554 )") 00:28:48.554 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:28:48.554 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:28:48.554 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:28:48.554 07:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:48.554 "params": { 00:28:48.554 "name": "Nvme1", 00:28:48.554 "trtype": "tcp", 00:28:48.554 "traddr": "10.0.0.2", 00:28:48.554 "adrfam": "ipv4", 00:28:48.554 "trsvcid": "4420", 00:28:48.554 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:48.554 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:48.554 "hdgst": false, 00:28:48.554 "ddgst": false 00:28:48.554 }, 00:28:48.554 "method": "bdev_nvme_attach_controller" 00:28:48.554 }' 00:28:48.812 [2024-11-20 07:30:51.987751] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:28:48.812 [2024-11-20 07:30:51.987856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2651177 ] 00:28:48.812 [2024-11-20 07:30:52.055621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.812 [2024-11-20 07:30:52.112990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:49.069 Running I/O for 10 seconds... 00:28:50.934 5608.00 IOPS, 43.81 MiB/s [2024-11-20T06:30:55.741Z] 5674.00 IOPS, 44.33 MiB/s [2024-11-20T06:30:56.676Z] 5657.33 IOPS, 44.20 MiB/s [2024-11-20T06:30:57.623Z] 5690.75 IOPS, 44.46 MiB/s [2024-11-20T06:30:58.556Z] 5689.60 IOPS, 44.45 MiB/s [2024-11-20T06:30:59.492Z] 5697.83 IOPS, 44.51 MiB/s [2024-11-20T06:31:00.426Z] 5694.00 IOPS, 44.48 MiB/s [2024-11-20T06:31:01.360Z] 5694.25 IOPS, 44.49 MiB/s [2024-11-20T06:31:02.738Z] 5691.33 IOPS, 44.46 MiB/s [2024-11-20T06:31:02.738Z] 5688.60 IOPS, 44.44 MiB/s 00:28:59.305 Latency(us) 00:28:59.305 [2024-11-20T06:31:02.738Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:59.305 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:28:59.305 Verification LBA range: start 0x0 length 0x1000 00:28:59.305 Nvme1n1 : 10.02 5691.32 44.46 0.00 0.00 22429.22 2924.85 29321.29 00:28:59.305 [2024-11-20T06:31:02.738Z] =================================================================================================================== 00:28:59.305 [2024-11-20T06:31:02.738Z] Total : 5691.32 44.46 0.00 0.00 22429.22 2924.85 29321.29 00:28:59.305 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2652356 00:28:59.305 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:28:59.305 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:59.305 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:28:59.305 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:28:59.305 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:28:59.305 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:28:59.305 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:59.305 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:59.305 { 00:28:59.305 "params": { 00:28:59.305 "name": "Nvme$subsystem", 00:28:59.305 "trtype": "$TEST_TRANSPORT", 00:28:59.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.305 "adrfam": "ipv4", 00:28:59.305 "trsvcid": "$NVMF_PORT", 00:28:59.306 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.306 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.306 "hdgst": ${hdgst:-false}, 00:28:59.306 "ddgst": ${ddgst:-false} 00:28:59.306 }, 00:28:59.306 "method": "bdev_nvme_attach_controller" 00:28:59.306 } 00:28:59.306 EOF 00:28:59.306 )") 00:28:59.306 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:28:59.306 
07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:28:59.306 [2024-11-20 07:31:02.587978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.306 [2024-11-20 07:31:02.588015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.306 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:28:59.306 07:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:59.306 "params": { 00:28:59.306 "name": "Nvme1", 00:28:59.306 "trtype": "tcp", 00:28:59.306 "traddr": "10.0.0.2", 00:28:59.306 "adrfam": "ipv4", 00:28:59.306 "trsvcid": "4420", 00:28:59.306 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:59.306 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:59.306 "hdgst": false, 00:28:59.306 "ddgst": false 00:28:59.306 }, 00:28:59.306 "method": "bdev_nvme_attach_controller" 00:28:59.306 }' 00:28:59.306 [2024-11-20 07:31:02.595915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.306 [2024-11-20 07:31:02.595943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.306 [2024-11-20 07:31:02.603914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.306 [2024-11-20 07:31:02.603935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.306 [2024-11-20 07:31:02.611910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.306 [2024-11-20 07:31:02.611930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.306 [2024-11-20 07:31:02.619909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.306 [2024-11-20 07:31:02.619929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.306 [2024-11-20 07:31:02.625155] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:28:59.306 [2024-11-20 07:31:02.625226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2652356 ] 00:28:59.306 [2024-11-20 07:31:02.627911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.306 [2024-11-20 07:31:02.627930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.306 [2024-11-20 07:31:02.635909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.306 [2024-11-20 07:31:02.635928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.306 [2024-11-20 07:31:02.643908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.306 [2024-11-20 07:31:02.643927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.306 [2024-11-20 07:31:02.651910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.306 [2024-11-20 07:31:02.651930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.306 [2024-11-20 07:31:02.659911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.306 [2024-11-20 07:31:02.659931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.306 [2024-11-20 07:31:02.667915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.306 [2024-11-20 07:31:02.667939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.306 [2024-11-20 07:31:02.675912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.306 [2024-11-20 07:31:02.675936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.306 [2024-11-20 07:31:02.683912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.306 [2024-11-20 07:31:02.683931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.306 [2024-11-20 07:31:02.691909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.306 [2024-11-20 07:31:02.691928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.306 [2024-11-20 07:31:02.693752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.306 [2024-11-20 07:31:02.699923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.306 [2024-11-20 07:31:02.699946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.306 [2024-11-20 07:31:02.707941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.306 [2024-11-20 07:31:02.707973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.306 [2024-11-20 07:31:02.715910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.306 [2024-11-20 07:31:02.715929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.306 [2024-11-20 07:31:02.723911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.306 [2024-11-20 07:31:02.723931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:28:59.306 [2024-11-20 07:31:02.731928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.306 [2024-11-20 07:31:02.731948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.565 [2024-11-20 07:31:02.739911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.565 [2024-11-20 07:31:02.739930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.565 [2024-11-20 07:31:02.747911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.565 [2024-11-20 07:31:02.747930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.565 [2024-11-20 07:31:02.755912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.565 [2024-11-20 07:31:02.755931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.565 [2024-11-20 07:31:02.755939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.565 [2024-11-20 07:31:02.763913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.565 [2024-11-20 07:31:02.763933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.566 [2024-11-20 07:31:02.771931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.566 [2024-11-20 07:31:02.771960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.566 [2024-11-20 07:31:02.779939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.566 [2024-11-20 07:31:02.779975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.566 [2024-11-20 07:31:02.787946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.566 [2024-11-20 07:31:02.787982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.566 [2024-11-20 07:31:02.795938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.566 [2024-11-20 07:31:02.795971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.566 [2024-11-20 07:31:02.803933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.566 [2024-11-20 07:31:02.803965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.566 [2024-11-20 07:31:02.811932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.566 [2024-11-20 07:31:02.811963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.566 [2024-11-20 07:31:02.819914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.566 [2024-11-20 07:31:02.819935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.566 [2024-11-20 07:31:02.827934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.566 [2024-11-20 07:31:02.827957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.566 [2024-11-20 07:31:02.835929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.566 [2024-11-20 07:31:02.835959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.566 [2024-11-20 
07:31:02.843931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.566 [2024-11-20 07:31:02.843961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.566 [2024-11-20 07:31:02.851927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.566 [2024-11-20 07:31:02.851955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.566 [2024-11-20 07:31:02.859910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.566 [2024-11-20 07:31:02.859929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.566 [2024-11-20 07:31:02.867909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.566 [2024-11-20 07:31:02.867928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.566 [2024-11-20 07:31:02.875916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.566 [2024-11-20 07:31:02.875940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.566 [2024-11-20 07:31:02.883914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.566 [2024-11-20 07:31:02.883936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.566 [2024-11-20 07:31:02.891914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.566 [2024-11-20 07:31:02.891936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.566 [2024-11-20 07:31:02.899915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.566 [2024-11-20 07:31:02.899937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.566 [2024-11-20 07:31:02.907910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.566 [2024-11-20 07:31:02.907930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.566 [2024-11-20 07:31:02.915909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.566 [2024-11-20 07:31:02.915928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.566 [2024-11-20 07:31:02.923909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.566 [2024-11-20 07:31:02.923927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.566 [2024-11-20 07:31:02.931910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.566 [2024-11-20 07:31:02.931929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.566 [2024-11-20 07:31:02.939943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.566 [2024-11-20 07:31:02.939964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.566 [2024-11-20 07:31:02.947929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.566 [2024-11-20 07:31:02.947950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.566 [2024-11-20 07:31:02.955925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.566 [2024-11-20 07:31:02.955950] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:28:59.566 [2024-11-20 07:31:02.963910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:28:59.566 [2024-11-20 07:31:02.963930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:28:59.566 [2024-11-20 07:31:02.971914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:28:59.566 [2024-11-20 07:31:02.971938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:28:59.566 [2024-11-20 07:31:02.979916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:28:59.566 [2024-11-20 07:31:02.979939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:28:59.566 Running I/O for 5 seconds...
[... the same subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext "Requested NSID 1 already in use" / nvmf_rpc.c:1517:nvmf_rpc_ns_paused "Unable to add namespace" pair repeats for each add-namespace attempt from 07:31:02.996401 to 07:31:03.984847 ...]
00:29:00.600 11299.00 IOPS, 88.27 MiB/s [2024-11-20T06:31:04.033Z]
[... same error pair repeats from 07:31:04.000958 to 07:31:04.974140 ...]
00:29:01.636 11367.00 IOPS, 88.80 MiB/s [2024-11-20T06:31:05.069Z]
[... same error pair repeats from 07:31:04.990746 to 07:31:05.990149 ...]
00:29:02.670 11374.00 IOPS, 88.86 MiB/s [2024-11-20T06:31:06.103Z]
[... same error pair repeats from 07:31:06.006715 to 07:31:06.768738 ...]
00:29:03.446 [2024-11-20 07:31:06.779787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:03.446 [2024-11-20 07:31:06.779812]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.446 [2024-11-20 07:31:06.790633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.446 [2024-11-20 07:31:06.790674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.446 [2024-11-20 07:31:06.804484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.446 [2024-11-20 07:31:06.804511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.446 [2024-11-20 07:31:06.814484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.446 [2024-11-20 07:31:06.814510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.446 [2024-11-20 07:31:06.830763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.446 [2024-11-20 07:31:06.830804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.446 [2024-11-20 07:31:06.845294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.446 [2024-11-20 07:31:06.845332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.446 [2024-11-20 07:31:06.855411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.446 [2024-11-20 07:31:06.855438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.446 [2024-11-20 07:31:06.867462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.446 [2024-11-20 07:31:06.867499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.705 [2024-11-20 07:31:06.878374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.705 [2024-11-20 07:31:06.878415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.705 [2024-11-20 07:31:06.893886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.705 [2024-11-20 07:31:06.893913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.705 [2024-11-20 07:31:06.904384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.705 [2024-11-20 07:31:06.904411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.705 [2024-11-20 07:31:06.916345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.705 [2024-11-20 07:31:06.916371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.705 [2024-11-20 07:31:06.926849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.705 [2024-11-20 07:31:06.926888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.705 [2024-11-20 07:31:06.941691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.705 [2024-11-20 07:31:06.941718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.705 [2024-11-20 07:31:06.951248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.705 [2024-11-20 07:31:06.951273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.705 [2024-11-20 07:31:06.962914] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.705 [2024-11-20 07:31:06.962938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.705 [2024-11-20 07:31:06.977194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.705 [2024-11-20 07:31:06.977244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.705 [2024-11-20 07:31:06.986693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.705 [2024-11-20 07:31:06.986716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.705 11402.00 IOPS, 89.08 MiB/s [2024-11-20T06:31:07.138Z] [2024-11-20 07:31:07.001511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.705 [2024-11-20 07:31:07.001538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.705 [2024-11-20 07:31:07.011089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.705 [2024-11-20 07:31:07.011115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.705 [2024-11-20 07:31:07.023723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.705 [2024-11-20 07:31:07.023749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.705 [2024-11-20 07:31:07.035139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.705 [2024-11-20 07:31:07.035164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.705 [2024-11-20 07:31:07.050153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.705 [2024-11-20 07:31:07.050180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.705 [2024-11-20 07:31:07.068667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.705 [2024-11-20 07:31:07.068693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.705 [2024-11-20 07:31:07.078367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.705 [2024-11-20 07:31:07.078394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.705 [2024-11-20 07:31:07.093017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.705 [2024-11-20 07:31:07.093042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.705 [2024-11-20 07:31:07.103724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.705 [2024-11-20 07:31:07.103748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.705 [2024-11-20 07:31:07.114755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.705 [2024-11-20 07:31:07.114793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.705 [2024-11-20 07:31:07.126087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.705 [2024-11-20 07:31:07.126111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.963 [2024-11-20 07:31:07.142180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:29:03.963 [2024-11-20 07:31:07.142206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.963 [2024-11-20 07:31:07.158272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.963 [2024-11-20 07:31:07.158323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.963 [2024-11-20 07:31:07.174069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.964 [2024-11-20 07:31:07.174097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.964 [2024-11-20 07:31:07.183795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.964 [2024-11-20 07:31:07.183820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.964 [2024-11-20 07:31:07.195851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.964 [2024-11-20 07:31:07.195891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.964 [2024-11-20 07:31:07.207330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.964 [2024-11-20 07:31:07.207356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.964 [2024-11-20 07:31:07.218549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.964 [2024-11-20 07:31:07.218590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.964 [2024-11-20 07:31:07.232712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.964 [2024-11-20 07:31:07.232737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.964 [2024-11-20 07:31:07.242747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.964 [2024-11-20 07:31:07.242771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.964 [2024-11-20 07:31:07.257187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.964 [2024-11-20 07:31:07.257212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.964 [2024-11-20 07:31:07.266850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.964 [2024-11-20 07:31:07.266875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.964 [2024-11-20 07:31:07.281399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.964 [2024-11-20 07:31:07.281441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.964 [2024-11-20 07:31:07.291182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.964 [2024-11-20 07:31:07.291206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.964 [2024-11-20 07:31:07.303149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.964 [2024-11-20 07:31:07.303188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.964 [2024-11-20 07:31:07.318462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.964 [2024-11-20 07:31:07.318490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.964 [2024-11-20 07:31:07.333367] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.964 [2024-11-20 07:31:07.333410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.964 [2024-11-20 07:31:07.343215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.964 [2024-11-20 07:31:07.343254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.964 [2024-11-20 07:31:07.355674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.964 [2024-11-20 07:31:07.355699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.964 [2024-11-20 07:31:07.366613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.964 [2024-11-20 07:31:07.366653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.964 [2024-11-20 07:31:07.382162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.964 [2024-11-20 07:31:07.382201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.222 [2024-11-20 07:31:07.397145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.222 [2024-11-20 07:31:07.397172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.222 [2024-11-20 07:31:07.407065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.222 [2024-11-20 07:31:07.407089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.222 [2024-11-20 07:31:07.419228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.222 [2024-11-20 07:31:07.419253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.222 [2024-11-20 07:31:07.430253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.222 [2024-11-20 07:31:07.430277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.222 [2024-11-20 07:31:07.444668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.222 [2024-11-20 07:31:07.444709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.222 [2024-11-20 07:31:07.454381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.222 [2024-11-20 07:31:07.454408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.222 [2024-11-20 07:31:07.469284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.222 [2024-11-20 07:31:07.469331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.222 [2024-11-20 07:31:07.479130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.222 [2024-11-20 07:31:07.479154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.222 [2024-11-20 07:31:07.490941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.222 [2024-11-20 07:31:07.490978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.222 [2024-11-20 07:31:07.505356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.222 [2024-11-20 07:31:07.505383] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.222 [2024-11-20 07:31:07.515019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.222 [2024-11-20 07:31:07.515044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.222 [2024-11-20 07:31:07.527153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.222 [2024-11-20 07:31:07.527178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.222 [2024-11-20 07:31:07.539790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.222 [2024-11-20 07:31:07.539817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.222 [2024-11-20 07:31:07.549746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.222 [2024-11-20 07:31:07.549771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.222 [2024-11-20 07:31:07.562020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.222 [2024-11-20 07:31:07.562045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.222 [2024-11-20 07:31:07.576787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.222 [2024-11-20 07:31:07.576829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.222 [2024-11-20 07:31:07.586368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.222 [2024-11-20 07:31:07.586394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.222 [2024-11-20 07:31:07.602074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.222 [2024-11-20 07:31:07.602100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.222 [2024-11-20 07:31:07.611986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.222 [2024-11-20 07:31:07.612010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.222 [2024-11-20 07:31:07.623638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.222 [2024-11-20 07:31:07.623661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.222 [2024-11-20 07:31:07.634470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.222 [2024-11-20 07:31:07.634496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.222 [2024-11-20 07:31:07.650397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.222 [2024-11-20 07:31:07.650424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.481 [2024-11-20 07:31:07.666507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.481 [2024-11-20 07:31:07.666533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.481 [2024-11-20 07:31:07.681976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.481 [2024-11-20 07:31:07.682004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.481 [2024-11-20 07:31:07.691950] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.481 [2024-11-20 07:31:07.691991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.481 [2024-11-20 07:31:07.704055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.481 [2024-11-20 07:31:07.704080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.481 [2024-11-20 07:31:07.715097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.481 [2024-11-20 07:31:07.715133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.481 [2024-11-20 07:31:07.726515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.481 [2024-11-20 07:31:07.726542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.481 [2024-11-20 07:31:07.741265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.481 [2024-11-20 07:31:07.741316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.481 [2024-11-20 07:31:07.750785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.481 [2024-11-20 07:31:07.750825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.481 [2024-11-20 07:31:07.765285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.481 [2024-11-20 07:31:07.765318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.481 [2024-11-20 07:31:07.775032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.481 [2024-11-20 07:31:07.775068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.481 [2024-11-20 07:31:07.789121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.481 [2024-11-20 07:31:07.789162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.481 [2024-11-20 07:31:07.799070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.481 [2024-11-20 07:31:07.799103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.481 [2024-11-20 07:31:07.813196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.481 [2024-11-20 07:31:07.813221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.481 [2024-11-20 07:31:07.823389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.481 [2024-11-20 07:31:07.823416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.481 [2024-11-20 07:31:07.835442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.481 [2024-11-20 07:31:07.835468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.481 [2024-11-20 07:31:07.848675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.481 [2024-11-20 07:31:07.848701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.481 [2024-11-20 07:31:07.858684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.481 [2024-11-20 07:31:07.858708] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.481 [2024-11-20 07:31:07.873874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.481 [2024-11-20 07:31:07.873899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.481 [2024-11-20 07:31:07.889354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.481 [2024-11-20 07:31:07.889383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.481 [2024-11-20 07:31:07.898865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.481 [2024-11-20 07:31:07.898890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.740 [2024-11-20 07:31:07.913698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.740 [2024-11-20 07:31:07.913729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.740 [2024-11-20 07:31:07.928919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.740 [2024-11-20 07:31:07.928945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.741 [2024-11-20 07:31:07.938169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.741 [2024-11-20 07:31:07.938195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.741 [2024-11-20 07:31:07.954322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.741 [2024-11-20 07:31:07.954364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.741 [2024-11-20 07:31:07.964697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.741 [2024-11-20 07:31:07.964723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.741 [2024-11-20 07:31:07.976994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.741 [2024-11-20 07:31:07.977034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.741 [2024-11-20 07:31:07.988472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.741 [2024-11-20 07:31:07.988514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.741 11403.80 IOPS, 89.09 MiB/s [2024-11-20T06:31:08.174Z] [2024-11-20 07:31:07.998968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.741 [2024-11-20 07:31:07.999007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.741 [2024-11-20 07:31:08.003937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.741 [2024-11-20 07:31:08.003961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.741 00:29:04.741 Latency(us) 00:29:04.741 [2024-11-20T06:31:08.174Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.741 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:29:04.741 Nvme1n1 : 5.01 11407.12 89.12 0.00 0.00 11206.98 2985.53 20291.89 00:29:04.741 [2024-11-20T06:31:08.174Z] =================================================================================================================== 00:29:04.741 
[2024-11-20T06:31:08.174Z] Total : 11407.12 89.12 0.00 0.00 11206.98 2985.53 20291.89 00:29:04.741 [2024-11-20 07:31:08.011914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.741 [2024-11-20 07:31:08.011937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.741 [2024-11-20 07:31:08.019913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.741 [2024-11-20 07:31:08.019936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.741 [2024-11-20 07:31:08.027933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.741 [2024-11-20 07:31:08.027961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.741 [2024-11-20 07:31:08.035973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.741 [2024-11-20 07:31:08.036016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.741 [2024-11-20 07:31:08.043965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.741 [2024-11-20 07:31:08.044007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.741 [2024-11-20 07:31:08.051960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.741 [2024-11-20 07:31:08.052003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.741 [2024-11-20 07:31:08.059971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.741 [2024-11-20 07:31:08.060015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.741 [2024-11-20 07:31:08.067963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.741 [2024-11-20 07:31:08.068021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.741 [2024-11-20 07:31:08.075960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.741 [2024-11-20 07:31:08.076001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.741 [2024-11-20 07:31:08.083964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.741 [2024-11-20 07:31:08.084004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.741 [2024-11-20 07:31:08.091964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.741 [2024-11-20 07:31:08.092004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.741 [2024-11-20 07:31:08.099966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.741 [2024-11-20 07:31:08.100008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.741 [2024-11-20 07:31:08.107971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.741 [2024-11-20 07:31:08.108016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.741 [2024-11-20 07:31:08.115962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.741 [2024-11-20 07:31:08.116003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.741 [2024-11-20 07:31:08.123962] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.741 [2024-11-20 07:31:08.124004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.741 [2024-11-20 07:31:08.131961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.741 [2024-11-20 07:31:08.132002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.741 [2024-11-20 07:31:08.139918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.741 [2024-11-20 07:31:08.139943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.741 [2024-11-20 07:31:08.147909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.741 [2024-11-20 07:31:08.147928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.741 [2024-11-20 07:31:08.155912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.741 [2024-11-20 07:31:08.155932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.741 [2024-11-20 07:31:08.163907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.741 [2024-11-20 07:31:08.163926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.000 [2024-11-20 07:31:08.171930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.000 [2024-11-20 07:31:08.171956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.000 [2024-11-20 07:31:08.179960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.000 [2024-11-20 07:31:08.180000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.000 [2024-11-20 07:31:08.187976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.000 [2024-11-20 07:31:08.188016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.000 [2024-11-20 07:31:08.195909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.000 [2024-11-20 07:31:08.195928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.000 [2024-11-20 07:31:08.203912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.000 [2024-11-20 07:31:08.203932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.000 [2024-11-20 07:31:08.211911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.000 [2024-11-20 07:31:08.211930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.000 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2652356) - No such process 00:29:05.000 07:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2652356 00:29:05.000 07:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:05.000 07:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.000 07:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:05.000 07:31:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.000 07:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:05.000 07:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.000 07:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:05.000 delay0 00:29:05.000 07:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.000 07:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:29:05.000 07:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.000 07:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:05.000 07:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.000 07:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:29:05.000 [2024-11-20 07:31:08.289794] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:29:13.110 Initializing NVMe Controllers 00:29:13.110 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:13.110 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:13.110 Initialization complete. Launching workers. 
00:29:13.110 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 231, failed: 23278 00:29:13.110 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 23386, failed to submit 123 00:29:13.110 success 23315, unsuccessful 71, failed 0 00:29:13.110 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:29:13.110 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:29:13.110 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:13.110 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:29:13.110 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:13.110 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:29:13.110 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:13.110 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:13.110 rmmod nvme_tcp 00:29:13.110 rmmod nvme_fabrics 00:29:13.110 rmmod nvme_keyring 00:29:13.110 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:13.110 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:29:13.110 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:29:13.110 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2651036 ']' 00:29:13.110 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2651036 00:29:13.110 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 2651036 ']' 00:29:13.111 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 2651036 00:29:13.111 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:29:13.111 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:13.111 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2651036 00:29:13.111 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:13.111 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:13.111 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2651036' 00:29:13.111 killing process with pid 2651036 00:29:13.111 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 2651036 00:29:13.111 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 2651036 00:29:13.111 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:13.111 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:13.111 07:31:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:13.111 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:29:13.111 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:29:13.111 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:13.111 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:29:13.111 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:13.111 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:13.111 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:13.111 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:13.111 07:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.487 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:14.487 00:29:14.487 real 0m28.592s 00:29:14.487 user 0m40.530s 00:29:14.487 sys 0m10.040s 00:29:14.488 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:14.488 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:14.488 ************************************ 00:29:14.488 END TEST nvmf_zcopy 00:29:14.488 ************************************ 00:29:14.488 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:29:14.488 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:14.488 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:14.488 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:14.488 ************************************ 00:29:14.488 START TEST nvmf_nmic 00:29:14.488 ************************************ 00:29:14.488 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:29:14.488 * Looking for test storage... 
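The nmic stage that starts here can also be launched on its own against an already prepared target host; a minimal sketch using the script path and flags shown in the trace (the SPDK tree location and the NIC/environment setup done by earlier stages are assumed to be in place).
    # Hypothetical standalone run of the same test stage; assumes the SPDK tree
    # and test NICs are already configured as in the surrounding log.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode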
00:29:14.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:14.488 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:14.488 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:29:14.488 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:14.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.747 --rc genhtml_branch_coverage=1 00:29:14.747 --rc genhtml_function_coverage=1 00:29:14.747 --rc genhtml_legend=1 00:29:14.747 --rc geninfo_all_blocks=1 00:29:14.747 --rc geninfo_unexecuted_blocks=1 00:29:14.747 00:29:14.747 ' 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:14.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.747 --rc genhtml_branch_coverage=1 00:29:14.747 --rc genhtml_function_coverage=1 00:29:14.747 --rc genhtml_legend=1 00:29:14.747 --rc geninfo_all_blocks=1 00:29:14.747 --rc geninfo_unexecuted_blocks=1 00:29:14.747 00:29:14.747 ' 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:14.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.747 --rc genhtml_branch_coverage=1 00:29:14.747 --rc genhtml_function_coverage=1 00:29:14.747 --rc genhtml_legend=1 00:29:14.747 --rc geninfo_all_blocks=1 00:29:14.747 --rc geninfo_unexecuted_blocks=1 00:29:14.747 00:29:14.747 ' 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:14.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.747 --rc genhtml_branch_coverage=1 00:29:14.747 --rc genhtml_function_coverage=1 00:29:14.747 --rc genhtml_legend=1 00:29:14.747 --rc geninfo_all_blocks=1 00:29:14.747 --rc geninfo_unexecuted_blocks=1 00:29:14.747 00:29:14.747 ' 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:14.747 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:14.748 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.748 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.748 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.748 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:29:14.748 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.748 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:29:14.748 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:14.748 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:14.748 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:14.748 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:14.748 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:14.748 07:31:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:14.748 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:14.748 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:14.748 07:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:14.748 07:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:14.748 07:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:14.748 07:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:14.748 07:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:29:14.748 07:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:14.748 07:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:14.748 07:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:14.748 07:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:14.748 07:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:14.748 07:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.748 07:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:14.748 07:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.748 07:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:14.748 07:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:14.748 07:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:29:14.748 07:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:16.701 07:31:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:16.701 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:16.701 07:31:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:16.701 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:16.701 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:16.702 Found net devices under 0000:09:00.0: cvl_0_0 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:16.702 
07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:16.702 Found net devices under 0000:09:00.1: cvl_0_1 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:16.702 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
00:29:16.961 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:16.961 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:16.961 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:16.961 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:16.961 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:16.961 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:16.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:16.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:29:16.961 00:29:16.961 --- 10.0.0.2 ping statistics --- 00:29:16.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:16.961 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:29:16.961 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:16.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:16.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:29:16.961 00:29:16.961 --- 10.0.0.1 ping statistics --- 00:29:16.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:16.961 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:29:16.961 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:16.961 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:29:16.961 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:16.961 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:16.961 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:16.961 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:16.961 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:16.961 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:16.961 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:16.961 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:29:16.961 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:16.961 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:16.961 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:16.961 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2655754 00:29:16.961 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:29:16.961 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2655754 00:29:16.961 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 2655754 ']' 00:29:16.961 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:16.961 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:16.961 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:16.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:16.961 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:16.961 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:16.961 [2024-11-20 07:31:20.275819] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:16.961 [2024-11-20 07:31:20.276926] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:29:16.961 [2024-11-20 07:31:20.276993] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:16.961 [2024-11-20 07:31:20.352668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:17.220 [2024-11-20 07:31:20.418116] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:17.220 [2024-11-20 07:31:20.418178] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:17.220 [2024-11-20 07:31:20.418191] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:17.220 [2024-11-20 07:31:20.418201] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:17.220 [2024-11-20 07:31:20.418225] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:17.220 [2024-11-20 07:31:20.419858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:17.220 [2024-11-20 07:31:20.419921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:17.220 [2024-11-20 07:31:20.419968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:17.220 [2024-11-20 07:31:20.419971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:17.220 [2024-11-20 07:31:20.523267] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:17.220 [2024-11-20 07:31:20.523539] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:17.220 [2024-11-20 07:31:20.524199] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:29:17.220 [2024-11-20 07:31:20.524478] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:17.220 [2024-11-20 07:31:20.525562] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:17.220 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:17.220 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:29:17.220 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:17.220 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:17.220 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:17.220 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:17.220 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:17.220 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.220 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:17.220 [2024-11-20 07:31:20.576623] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:17.220 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.220 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:17.220 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.220 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:17.220 Malloc0 00:29:17.220 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.220 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:29:17.220 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.220 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:17.220 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.220 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:17.220 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.220 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:17.220 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.220 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:29:17.220 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.220 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:17.478 [2024-11-20 07:31:20.652821] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:17.478 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.478 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:29:17.478 test case1: single bdev can't be used in multiple subsystems 00:29:17.478 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:29:17.478 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.478 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:17.478 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.478 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:17.478 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.479 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:17.479 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.479 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:29:17.479 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:29:17.479 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.479 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:17.479 [2024-11-20 07:31:20.676540] bdev.c:8462:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:29:17.479 [2024-11-20 07:31:20.676571] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:29:17.479 [2024-11-20 07:31:20.676586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:17.479 request: 00:29:17.479 { 00:29:17.479 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:29:17.479 "namespace": { 00:29:17.479 "bdev_name": "Malloc0", 00:29:17.479 "no_auto_visible": false 00:29:17.479 }, 00:29:17.479 "method": "nvmf_subsystem_add_ns", 00:29:17.479 "req_id": 1 00:29:17.479 } 00:29:17.479 Got JSON-RPC error response 00:29:17.479 response: 00:29:17.479 { 00:29:17.479 "code": -32602, 00:29:17.479 "message": "Invalid parameters" 00:29:17.479 } 00:29:17.479 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:17.479 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:29:17.479 07:31:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:29:17.479 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:29:17.479 Adding namespace failed - expected result. 00:29:17.479 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:29:17.479 test case2: host connect to nvmf target in multiple paths 00:29:17.479 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:17.479 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.479 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:17.479 [2024-11-20 07:31:20.684654] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:17.479 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.479 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:29:17.737 07:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:29:17.737 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:29:17.737 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:29:17.737 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:29:17.737 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:29:17.737 07:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:29:20.344 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:29:20.344 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:29:20.344 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:29:20.344 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:29:20.344 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:29:20.344 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:29:20.344 07:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:29:20.344 [global] 00:29:20.344 thread=1 00:29:20.344 invalidate=1 
00:29:20.344 rw=write 00:29:20.344 time_based=1 00:29:20.344 runtime=1 00:29:20.344 ioengine=libaio 00:29:20.344 direct=1 00:29:20.344 bs=4096 00:29:20.344 iodepth=1 00:29:20.344 norandommap=0 00:29:20.344 numjobs=1 00:29:20.344 00:29:20.344 verify_dump=1 00:29:20.344 verify_backlog=512 00:29:20.344 verify_state_save=0 00:29:20.344 do_verify=1 00:29:20.344 verify=crc32c-intel 00:29:20.344 [job0] 00:29:20.344 filename=/dev/nvme0n1 00:29:20.344 Could not set queue depth (nvme0n1) 00:29:20.344 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:20.344 fio-3.35 00:29:20.344 Starting 1 thread 00:29:21.303 00:29:21.303 job0: (groupid=0, jobs=1): err= 0: pid=2656245: Wed Nov 20 07:31:24 2024 00:29:21.303 read: IOPS=1241, BW=4967KiB/s (5087kB/s)(5012KiB/1009msec) 00:29:21.303 slat (nsec): min=4692, max=48484, avg=11091.44, stdev=6914.00 00:29:21.303 clat (usec): min=193, max=42047, avg=586.35, stdev=3692.70 00:29:21.303 lat (usec): min=201, max=42065, avg=597.44, stdev=3693.94 00:29:21.303 clat percentiles (usec): 00:29:21.303 | 1.00th=[ 200], 5.00th=[ 204], 10.00th=[ 206], 20.00th=[ 210], 00:29:21.303 | 30.00th=[ 212], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 235], 00:29:21.303 | 70.00th=[ 260], 80.00th=[ 277], 90.00th=[ 400], 95.00th=[ 445], 00:29:21.303 | 99.00th=[ 611], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:29:21.303 | 99.99th=[42206] 00:29:21.303 write: IOPS=1522, BW=6089KiB/s (6235kB/s)(6144KiB/1009msec); 0 zone resets 00:29:21.303 slat (nsec): min=5516, max=47697, avg=11839.61, stdev=4445.89 00:29:21.303 clat (usec): min=134, max=341, avg=151.79, stdev=12.09 00:29:21.303 lat (usec): min=140, max=352, avg=163.63, stdev=13.88 00:29:21.303 clat percentiles (usec): 00:29:21.303 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 145], 00:29:21.303 | 30.00th=[ 147], 40.00th=[ 149], 50.00th=[ 151], 60.00th=[ 153], 00:29:21.303 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 161], 95.00th=[ 167], 00:29:21.303 | 99.00th=[ 202], 99.50th=[ 235], 99.90th=[ 281], 99.95th=[ 343], 00:29:21.303 | 99.99th=[ 343] 00:29:21.303 bw ( KiB/s): min= 4096, max= 8192, per=100.00%, avg=6144.00, stdev=2896.31, samples=2 00:29:21.303 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:29:21.303 lat (usec) : 250=84.26%, 500=14.70%, 750=0.68% 00:29:21.303 lat (msec) : 50=0.36% 00:29:21.303 cpu : usr=1.98%, sys=3.08%, ctx=2792, majf=0, minf=1 00:29:21.303 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:21.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.303 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.303 issued rwts: total=1253,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.303 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:21.303 00:29:21.303 Run status group 0 (all jobs): 00:29:21.303 READ: bw=4967KiB/s (5087kB/s), 4967KiB/s-4967KiB/s (5087kB/s-5087kB/s), io=5012KiB (5132kB), run=1009-1009msec 00:29:21.303 WRITE: bw=6089KiB/s (6235kB/s), 6089KiB/s-6089KiB/s (6235kB/s-6235kB/s), io=6144KiB (6291kB), run=1009-1009msec 00:29:21.303 00:29:21.303 Disk stats (read/write): 00:29:21.303 nvme0n1: ios=1277/1536, merge=0/0, ticks=1589/242, in_queue=1831, util=98.70% 00:29:21.303 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:21.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:29:21.303 07:31:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:29:21.303 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:29:21.303 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:29:21.303 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:21.303 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:29:21.303 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:21.303 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:29:21.303 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:21.303 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:29:21.303 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:21.303 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:29:21.303 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:21.303 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:29:21.303 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:21.304 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:21.304 rmmod nvme_tcp 00:29:21.304 rmmod nvme_fabrics 00:29:21.304 rmmod nvme_keyring 00:29:21.304 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:21.562 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:29:21.562 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:29:21.562 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2655754 ']' 00:29:21.562 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2655754 00:29:21.562 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 2655754 ']' 00:29:21.562 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 2655754 00:29:21.562 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:29:21.562 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:21.562 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2655754 00:29:21.562 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:21.562 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:21.562 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@970 -- # 
echo 'killing process with pid 2655754' 00:29:21.562 killing process with pid 2655754 00:29:21.562 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 2655754 00:29:21.562 07:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 2655754 00:29:21.822 07:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:21.822 07:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:21.822 07:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:21.822 07:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:29:21.822 07:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:29:21.822 07:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:21.822 07:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:29:21.822 07:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:21.822 07:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:21.822 07:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.822 07:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:21.822 07:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.728 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:23.728 00:29:23.728 real 0m9.233s 00:29:23.728 user 0m17.264s 00:29:23.728 sys 0m3.458s 00:29:23.728 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:23.728 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:23.728 ************************************ 00:29:23.728 END TEST nvmf_nmic 00:29:23.728 ************************************ 00:29:23.728 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:29:23.728 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:23.728 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:23.728 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:23.728 ************************************ 00:29:23.728 START TEST nvmf_fio_target 00:29:23.728 ************************************ 00:29:23.728 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:29:23.728 * Looking for test storage... 
00:29:23.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:23.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.987 --rc genhtml_branch_coverage=1 00:29:23.987 --rc genhtml_function_coverage=1 00:29:23.987 --rc genhtml_legend=1 00:29:23.987 --rc geninfo_all_blocks=1 00:29:23.987 --rc geninfo_unexecuted_blocks=1 00:29:23.987 00:29:23.987 ' 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:23.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.987 --rc genhtml_branch_coverage=1 00:29:23.987 --rc genhtml_function_coverage=1 00:29:23.987 --rc genhtml_legend=1 00:29:23.987 --rc geninfo_all_blocks=1 00:29:23.987 --rc geninfo_unexecuted_blocks=1 00:29:23.987 00:29:23.987 ' 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:23.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.987 --rc genhtml_branch_coverage=1 00:29:23.987 --rc genhtml_function_coverage=1 00:29:23.987 --rc genhtml_legend=1 00:29:23.987 --rc geninfo_all_blocks=1 00:29:23.987 --rc geninfo_unexecuted_blocks=1 00:29:23.987 00:29:23.987 ' 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:23.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.987 --rc genhtml_branch_coverage=1 00:29:23.987 --rc genhtml_function_coverage=1 00:29:23.987 --rc genhtml_legend=1 00:29:23.987 --rc geninfo_all_blocks=1 00:29:23.987 --rc geninfo_unexecuted_blocks=1 00:29:23.987 
00:29:23.987 ' 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:23.987 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:29:23.988 07:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:26.525 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:26.525 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:29:26.525 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:26.525 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:26.525 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:26.525 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:26.525 07:31:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:26.525 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:29:26.525 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:26.525 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:29:26.525 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:29:26.525 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:26.526 07:31:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:26.526 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:26.526 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:26.526 Found net 
devices under 0000:09:00.0: cvl_0_0 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:26.526 Found net devices under 0000:09:00.1: cvl_0_1 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:26.526 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:26.527 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:26.527 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:26.527 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:26.527 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:26.527 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:26.527 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:26.527 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:26.527 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:26.527 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:26.527 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:26.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:26.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:29:26.527 00:29:26.527 --- 10.0.0.2 ping statistics --- 00:29:26.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.527 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:29:26.527 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:26.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:26.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:29:26.527 00:29:26.527 --- 10.0.0.1 ping statistics --- 00:29:26.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.527 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:29:26.527 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:26.527 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:29:26.527 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:26.527 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:26.527 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:26.527 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:26.527 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:26.527 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:26.527 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:26.527 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:29:26.527 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:26.527 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:26.527 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:26.527 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2658440 00:29:26.527 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:29:26.527 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2658440 00:29:26.527 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 2658440 ']' 00:29:26.527 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:26.527 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:26.527 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:26.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:26.527 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:26.527 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:26.527 [2024-11-20 07:31:29.632783] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:26.527 [2024-11-20 07:31:29.633823] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:29:26.527 [2024-11-20 07:31:29.633872] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:26.527 [2024-11-20 07:31:29.702498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:26.527 [2024-11-20 07:31:29.757128] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:26.527 [2024-11-20 07:31:29.757175] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:26.527 [2024-11-20 07:31:29.757197] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:26.527 [2024-11-20 07:31:29.757207] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:26.527 [2024-11-20 07:31:29.757217] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:26.527 [2024-11-20 07:31:29.758767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:26.527 [2024-11-20 07:31:29.758822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:26.527 [2024-11-20 07:31:29.758891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:26.527 [2024-11-20 07:31:29.758894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.527 [2024-11-20 07:31:29.845397] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:26.527 [2024-11-20 07:31:29.845598] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:26.527 [2024-11-20 07:31:29.845910] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:26.527 [2024-11-20 07:31:29.846561] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:26.527 [2024-11-20 07:31:29.846800] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
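Condensed, the network preparation and target launch traced above amount to roughly the following shell sequence. This is a sketch reconstructed from this run's own commands, not a general recipe: it assumes root privileges and the two E810 netdevs cvl_0_0/cvl_0_1 detected earlier, and reuses the addresses, listener port, and nvmf_tgt flags that appear in the trace (the iptables comment match and the addr-flush steps are omitted for brevity).
# move one port into a private namespace and address both ends
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port on the initiator-side interface and verify reachability both ways
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# start the target inside the namespace in interrupt mode (as nvmfappstart does above)
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
The subsequent RPCs in the trace (bdev_malloc_create, bdev_raid_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) then talk to this process over /var/tmp/spdk.sock once waitforlisten returns.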
00:29:26.527 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:26.527 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:29:26.527 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:26.527 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:26.527 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:26.527 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:26.527 07:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:26.785 [2024-11-20 07:31:30.179582] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:27.043 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:27.301 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:29:27.301 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:27.558 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:29:27.558 07:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:27.816 07:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:29:27.816 07:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:28.074 07:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:29:28.074 07:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:29:28.332 07:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:28.590 07:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:29:28.590 07:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:28.848 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:29:28.848 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:29.414 07:31:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:29:29.414 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:29:29.414 07:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:29:29.672 07:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:29:29.672 07:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:29.930 07:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:29:29.930 07:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:30.496 07:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:30.496 [2024-11-20 07:31:33.879748] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:30.496 07:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:29:30.753 07:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:29:31.317 07:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:29:31.317 07:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:29:31.317 07:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:29:31.317 07:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:29:31.317 07:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:29:31.317 07:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:29:31.317 07:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:29:33.215 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:29:33.215 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o 
NAME,SERIAL 00:29:33.215 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:29:33.215 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:29:33.215 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:29:33.215 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:29:33.215 07:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:29:33.473 [global] 00:29:33.473 thread=1 00:29:33.473 invalidate=1 00:29:33.473 rw=write 00:29:33.473 time_based=1 00:29:33.473 runtime=1 00:29:33.473 ioengine=libaio 00:29:33.473 direct=1 00:29:33.473 bs=4096 00:29:33.473 iodepth=1 00:29:33.473 norandommap=0 00:29:33.473 numjobs=1 00:29:33.473 00:29:33.473 verify_dump=1 00:29:33.473 verify_backlog=512 00:29:33.473 verify_state_save=0 00:29:33.473 do_verify=1 00:29:33.473 verify=crc32c-intel 00:29:33.473 [job0] 00:29:33.473 filename=/dev/nvme0n1 00:29:33.473 [job1] 00:29:33.473 filename=/dev/nvme0n2 00:29:33.473 [job2] 00:29:33.473 filename=/dev/nvme0n3 00:29:33.473 [job3] 00:29:33.473 filename=/dev/nvme0n4 00:29:33.473 Could not set queue depth (nvme0n1) 00:29:33.473 Could not set queue depth (nvme0n2) 00:29:33.473 Could not set queue depth (nvme0n3) 00:29:33.473 Could not set queue depth (nvme0n4) 00:29:33.473 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:33.473 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:33.473 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:33.473 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:33.473 fio-3.35 00:29:33.473 Starting 4 threads 00:29:34.847 00:29:34.847 job0: (groupid=0, jobs=1): err= 0: pid=2659382: Wed Nov 20 07:31:38 2024 00:29:34.847 read: IOPS=21, BW=85.7KiB/s (87.7kB/s)(88.0KiB/1027msec) 00:29:34.847 slat (nsec): min=7361, max=36359, avg=15267.00, stdev=7716.88 00:29:34.847 clat (usec): min=40888, max=41059, avg=40976.47, stdev=31.47 00:29:34.847 lat (usec): min=40896, max=41071, avg=40991.74, stdev=30.09 00:29:34.847 clat percentiles (usec): 00:29:34.847 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:29:34.847 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:29:34.847 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:29:34.847 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:29:34.847 | 99.99th=[41157] 00:29:34.847 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:29:34.847 slat (nsec): min=7425, max=32290, avg=8921.78, stdev=2288.59 00:29:34.847 clat (usec): min=150, max=1839, avg=232.64, stdev=97.05 00:29:34.847 lat (usec): min=158, max=1851, avg=241.56, stdev=97.66 00:29:34.847 clat percentiles (usec): 00:29:34.847 | 1.00th=[ 165], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 190], 00:29:34.847 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 225], 00:29:34.847 | 70.00th=[ 235], 80.00th=[ 247], 90.00th=[ 269], 95.00th=[ 375], 00:29:34.847 | 
99.00th=[ 412], 99.50th=[ 701], 99.90th=[ 1844], 99.95th=[ 1844], 00:29:34.847 | 99.99th=[ 1844] 00:29:34.847 bw ( KiB/s): min= 4096, max= 4096, per=20.54%, avg=4096.00, stdev= 0.00, samples=1 00:29:34.847 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:34.847 lat (usec) : 250=80.34%, 500=14.61%, 750=0.56%, 1000=0.19% 00:29:34.847 lat (msec) : 2=0.19%, 50=4.12% 00:29:34.847 cpu : usr=0.29%, sys=0.68%, ctx=534, majf=0, minf=1 00:29:34.847 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:34.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.847 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:34.847 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:34.847 job1: (groupid=0, jobs=1): err= 0: pid=2659383: Wed Nov 20 07:31:38 2024 00:29:34.847 read: IOPS=1961, BW=7844KiB/s (8032kB/s)(7852KiB/1001msec) 00:29:34.847 slat (nsec): min=4198, max=44345, avg=8986.51, stdev=4386.95 00:29:34.847 clat (usec): min=180, max=668, avg=299.39, stdev=105.52 00:29:34.847 lat (usec): min=185, max=704, avg=308.38, stdev=108.03 00:29:34.847 clat percentiles (usec): 00:29:34.847 | 1.00th=[ 206], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 231], 00:29:34.847 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 253], 00:29:34.847 | 70.00th=[ 265], 80.00th=[ 445], 90.00th=[ 494], 95.00th=[ 506], 00:29:34.847 | 99.00th=[ 545], 99.50th=[ 562], 99.90th=[ 594], 99.95th=[ 668], 00:29:34.847 | 99.99th=[ 668] 00:29:34.847 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:29:34.847 slat (nsec): min=5473, max=34101, avg=7365.46, stdev=2793.49 00:29:34.847 clat (usec): min=129, max=1964, avg=180.15, stdev=65.11 00:29:34.847 lat (usec): min=135, max=1975, avg=187.52, stdev=65.63 00:29:34.847 clat percentiles (usec): 00:29:34.847 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 151], 00:29:34.847 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 167], 60.00th=[ 178], 00:29:34.847 | 70.00th=[ 194], 80.00th=[ 202], 90.00th=[ 215], 95.00th=[ 227], 00:29:34.847 | 99.00th=[ 388], 99.50th=[ 408], 99.90th=[ 1004], 99.95th=[ 1500], 00:29:34.847 | 99.99th=[ 1958] 00:29:34.847 bw ( KiB/s): min= 8192, max= 8192, per=41.08%, avg=8192.00, stdev= 0.00, samples=1 00:29:34.847 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:29:34.847 lat (usec) : 250=76.61%, 500=19.70%, 750=3.62% 00:29:34.847 lat (msec) : 2=0.07% 00:29:34.847 cpu : usr=2.40%, sys=2.70%, ctx=4013, majf=0, minf=1 00:29:34.847 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:34.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.847 issued rwts: total=1963,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:34.847 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:34.847 job2: (groupid=0, jobs=1): err= 0: pid=2659388: Wed Nov 20 07:31:38 2024 00:29:34.847 read: IOPS=1929, BW=7716KiB/s (7901kB/s)(7724KiB/1001msec) 00:29:34.847 slat (nsec): min=5753, max=50024, avg=9490.96, stdev=5412.88 00:29:34.847 clat (usec): min=208, max=618, avg=277.31, stdev=43.18 00:29:34.847 lat (usec): min=215, max=625, avg=286.80, stdev=45.40 00:29:34.847 clat percentiles (usec): 00:29:34.847 | 1.00th=[ 231], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 253], 00:29:34.847 | 30.00th=[ 260], 40.00th=[ 
265], 50.00th=[ 269], 60.00th=[ 273], 00:29:34.847 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 314], 95.00th=[ 334], 00:29:34.847 | 99.00th=[ 506], 99.50th=[ 562], 99.90th=[ 594], 99.95th=[ 619], 00:29:34.847 | 99.99th=[ 619] 00:29:34.848 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:29:34.848 slat (usec): min=7, max=768, avg= 9.68, stdev=16.99 00:29:34.848 clat (usec): min=160, max=3572, avg=202.98, stdev=83.45 00:29:34.848 lat (usec): min=168, max=3580, avg=212.66, stdev=85.23 00:29:34.848 clat percentiles (usec): 00:29:34.848 | 1.00th=[ 165], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 184], 00:29:34.848 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 202], 00:29:34.848 | 70.00th=[ 210], 80.00th=[ 217], 90.00th=[ 227], 95.00th=[ 235], 00:29:34.848 | 99.00th=[ 269], 99.50th=[ 310], 99.90th=[ 1074], 99.95th=[ 1090], 00:29:34.848 | 99.99th=[ 3589] 00:29:34.848 bw ( KiB/s): min= 8192, max= 8192, per=41.08%, avg=8192.00, stdev= 0.00, samples=1 00:29:34.848 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:29:34.848 lat (usec) : 250=58.21%, 500=41.17%, 750=0.53%, 1000=0.03% 00:29:34.848 lat (msec) : 2=0.05%, 4=0.03% 00:29:34.848 cpu : usr=2.00%, sys=5.80%, ctx=3982, majf=0, minf=1 00:29:34.848 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:34.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.848 issued rwts: total=1931,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:34.848 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:34.848 job3: (groupid=0, jobs=1): err= 0: pid=2659389: Wed Nov 20 07:31:38 2024 00:29:34.848 read: IOPS=21, BW=87.8KiB/s (89.9kB/s)(88.0KiB/1002msec) 00:29:34.848 slat (nsec): min=7515, max=34771, avg=15378.77, stdev=6419.02 00:29:34.848 clat (usec): min=19691, max=41999, avg=40102.11, stdev=4568.37 00:29:34.848 lat (usec): min=19725, max=42013, avg=40117.49, stdev=4564.03 00:29:34.848 clat percentiles (usec): 00:29:34.848 | 1.00th=[19792], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:29:34.848 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:29:34.848 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:29:34.848 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:29:34.848 | 99.99th=[42206] 00:29:34.848 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:29:34.848 slat (nsec): min=6067, max=29599, avg=9006.26, stdev=2144.09 00:29:34.848 clat (usec): min=155, max=2881, avg=221.06, stdev=122.12 00:29:34.848 lat (usec): min=163, max=2888, avg=230.06, stdev=122.23 00:29:34.848 clat percentiles (usec): 00:29:34.848 | 1.00th=[ 172], 5.00th=[ 182], 10.00th=[ 190], 20.00th=[ 196], 00:29:34.848 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 208], 60.00th=[ 219], 00:29:34.848 | 70.00th=[ 233], 80.00th=[ 243], 90.00th=[ 249], 95.00th=[ 253], 00:29:34.848 | 99.00th=[ 273], 99.50th=[ 359], 99.90th=[ 2868], 99.95th=[ 2868], 00:29:34.848 | 99.99th=[ 2868] 00:29:34.848 bw ( KiB/s): min= 4096, max= 4096, per=20.54%, avg=4096.00, stdev= 0.00, samples=1 00:29:34.848 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:34.848 lat (usec) : 250=87.45%, 500=8.05%, 750=0.19% 00:29:34.848 lat (msec) : 4=0.19%, 20=0.19%, 50=3.93% 00:29:34.848 cpu : usr=0.30%, sys=0.50%, ctx=534, majf=0, minf=2 00:29:34.848 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:29:34.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.848 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:34.848 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:34.848 00:29:34.848 Run status group 0 (all jobs): 00:29:34.848 READ: bw=15.0MiB/s (15.7MB/s), 85.7KiB/s-7844KiB/s (87.7kB/s-8032kB/s), io=15.4MiB (16.1MB), run=1001-1027msec 00:29:34.848 WRITE: bw=19.5MiB/s (20.4MB/s), 1994KiB/s-8184KiB/s (2042kB/s-8380kB/s), io=20.0MiB (21.0MB), run=1001-1027msec 00:29:34.848 00:29:34.848 Disk stats (read/write): 00:29:34.848 nvme0n1: ios=67/512, merge=0/0, ticks=723/114, in_queue=837, util=86.37% 00:29:34.848 nvme0n2: ios=1587/1788, merge=0/0, ticks=787/319, in_queue=1106, util=99.19% 00:29:34.848 nvme0n3: ios=1600/1930, merge=0/0, ticks=1239/373, in_queue=1612, util=97.80% 00:29:34.848 nvme0n4: ios=71/512, merge=0/0, ticks=822/110, in_queue=932, util=99.68% 00:29:34.848 07:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:29:34.848 [global] 00:29:34.848 thread=1 00:29:34.848 invalidate=1 00:29:34.848 rw=randwrite 00:29:34.848 time_based=1 00:29:34.848 runtime=1 00:29:34.848 ioengine=libaio 00:29:34.848 direct=1 00:29:34.848 bs=4096 00:29:34.848 iodepth=1 00:29:34.848 norandommap=0 00:29:34.848 numjobs=1 00:29:34.848 00:29:34.848 verify_dump=1 00:29:34.848 verify_backlog=512 00:29:34.848 verify_state_save=0 00:29:34.848 do_verify=1 00:29:34.848 verify=crc32c-intel 00:29:34.848 [job0] 00:29:34.848 filename=/dev/nvme0n1 00:29:34.848 [job1] 00:29:34.848 filename=/dev/nvme0n2 00:29:34.848 [job2] 00:29:34.848 filename=/dev/nvme0n3 00:29:34.848 [job3] 00:29:34.848 filename=/dev/nvme0n4 00:29:34.848 Could not set queue depth (nvme0n1) 00:29:34.848 Could not set queue depth (nvme0n2) 00:29:34.848 Could not set queue depth (nvme0n3) 00:29:34.848 Could not set queue depth (nvme0n4) 00:29:35.106 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:35.106 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:35.106 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:35.106 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:35.106 fio-3.35 00:29:35.106 Starting 4 threads 00:29:36.477 00:29:36.477 job0: (groupid=0, jobs=1): err= 0: pid=2659620: Wed Nov 20 07:31:39 2024 00:29:36.477 read: IOPS=2298, BW=9195KiB/s (9415kB/s)(9204KiB/1001msec) 00:29:36.477 slat (nsec): min=4066, max=36315, avg=7282.48, stdev=4409.27 00:29:36.477 clat (usec): min=179, max=619, avg=224.95, stdev=52.68 00:29:36.477 lat (usec): min=184, max=635, avg=232.23, stdev=54.50 00:29:36.477 clat percentiles (usec): 00:29:36.477 | 1.00th=[ 184], 5.00th=[ 186], 10.00th=[ 188], 20.00th=[ 190], 00:29:36.477 | 30.00th=[ 194], 40.00th=[ 202], 50.00th=[ 208], 60.00th=[ 219], 00:29:36.477 | 70.00th=[ 237], 80.00th=[ 251], 90.00th=[ 269], 95.00th=[ 297], 00:29:36.478 | 99.00th=[ 482], 99.50th=[ 529], 99.90th=[ 570], 99.95th=[ 603], 00:29:36.478 | 99.99th=[ 619] 00:29:36.478 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:29:36.478 slat (nsec): min=5185, 
max=36684, avg=8348.06, stdev=4107.63 00:29:36.478 clat (usec): min=130, max=914, avg=168.95, stdev=50.98 00:29:36.478 lat (usec): min=136, max=921, avg=177.29, stdev=51.74 00:29:36.478 clat percentiles (usec): 00:29:36.478 | 1.00th=[ 133], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 137], 00:29:36.478 | 30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 149], 60.00th=[ 155], 00:29:36.478 | 70.00th=[ 165], 80.00th=[ 202], 90.00th=[ 247], 95.00th=[ 255], 00:29:36.478 | 99.00th=[ 318], 99.50th=[ 330], 99.90th=[ 734], 99.95th=[ 758], 00:29:36.478 | 99.99th=[ 914] 00:29:36.478 bw ( KiB/s): min= 8992, max= 8992, per=50.25%, avg=8992.00, stdev= 0.00, samples=1 00:29:36.478 iops : min= 2248, max= 2248, avg=2248.00, stdev= 0.00, samples=1 00:29:36.478 lat (usec) : 250=87.16%, 500=12.36%, 750=0.43%, 1000=0.04% 00:29:36.478 cpu : usr=2.50%, sys=3.50%, ctx=4863, majf=0, minf=1 00:29:36.478 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:36.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.478 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.478 issued rwts: total=2301,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:36.478 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:36.478 job1: (groupid=0, jobs=1): err= 0: pid=2659621: Wed Nov 20 07:31:39 2024 00:29:36.478 read: IOPS=22, BW=91.7KiB/s (93.9kB/s)(92.0KiB/1003msec) 00:29:36.478 slat (nsec): min=7047, max=14918, avg=13370.87, stdev=1860.88 00:29:36.478 clat (usec): min=227, max=41007, avg=39189.22, stdev=8493.55 00:29:36.478 lat (usec): min=241, max=41020, avg=39202.59, stdev=8493.49 00:29:36.478 clat percentiles (usec): 00:29:36.478 | 1.00th=[ 229], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:29:36.478 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:29:36.478 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:29:36.478 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:29:36.478 | 99.99th=[41157] 00:29:36.478 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:29:36.478 slat (nsec): min=6263, max=29604, avg=9142.00, stdev=2639.21 00:29:36.478 clat (usec): min=155, max=1971, avg=185.17, stdev=82.18 00:29:36.478 lat (usec): min=163, max=1978, avg=194.31, stdev=82.18 00:29:36.478 clat percentiles (usec): 00:29:36.478 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 167], 00:29:36.478 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 178], 00:29:36.478 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 229], 95.00th=[ 243], 00:29:36.478 | 99.00th=[ 249], 99.50th=[ 251], 99.90th=[ 1975], 99.95th=[ 1975], 00:29:36.478 | 99.99th=[ 1975] 00:29:36.478 bw ( KiB/s): min= 4096, max= 4096, per=22.89%, avg=4096.00, stdev= 0.00, samples=1 00:29:36.478 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:36.478 lat (usec) : 250=95.14%, 500=0.56% 00:29:36.478 lat (msec) : 2=0.19%, 50=4.11% 00:29:36.478 cpu : usr=0.00%, sys=0.70%, ctx=538, majf=0, minf=1 00:29:36.478 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:36.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.478 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.478 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:36.478 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:36.478 job2: (groupid=0, jobs=1): err= 0: pid=2659622: Wed Nov 20 07:31:39 2024 
00:29:36.478 read: IOPS=527, BW=2109KiB/s (2159kB/s)(2172KiB/1030msec) 00:29:36.478 slat (nsec): min=6031, max=63438, avg=14002.35, stdev=5853.20 00:29:36.478 clat (usec): min=232, max=41014, avg=1407.71, stdev=6670.30 00:29:36.478 lat (usec): min=239, max=41031, avg=1421.71, stdev=6670.09 00:29:36.478 clat percentiles (usec): 00:29:36.478 | 1.00th=[ 247], 5.00th=[ 262], 10.00th=[ 269], 20.00th=[ 273], 00:29:36.478 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 289], 00:29:36.478 | 70.00th=[ 293], 80.00th=[ 297], 90.00th=[ 310], 95.00th=[ 318], 00:29:36.478 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:29:36.478 | 99.99th=[41157] 00:29:36.478 write: IOPS=994, BW=3977KiB/s (4072kB/s)(4096KiB/1030msec); 0 zone resets 00:29:36.478 slat (nsec): min=7610, max=57469, avg=16372.16, stdev=7234.49 00:29:36.478 clat (usec): min=169, max=419, avg=227.70, stdev=37.06 00:29:36.478 lat (usec): min=177, max=435, avg=244.08, stdev=37.31 00:29:36.478 clat percentiles (usec): 00:29:36.478 | 1.00th=[ 176], 5.00th=[ 184], 10.00th=[ 194], 20.00th=[ 204], 00:29:36.478 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 225], 00:29:36.478 | 70.00th=[ 235], 80.00th=[ 247], 90.00th=[ 265], 95.00th=[ 297], 00:29:36.478 | 99.00th=[ 375], 99.50th=[ 388], 99.90th=[ 408], 99.95th=[ 420], 00:29:36.478 | 99.99th=[ 420] 00:29:36.478 bw ( KiB/s): min= 8192, max= 8192, per=45.78%, avg=8192.00, stdev= 0.00, samples=1 00:29:36.478 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:29:36.478 lat (usec) : 250=55.14%, 500=43.91% 00:29:36.478 lat (msec) : 50=0.96% 00:29:36.478 cpu : usr=1.75%, sys=3.01%, ctx=1568, majf=0, minf=1 00:29:36.478 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:36.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.478 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.478 issued rwts: total=543,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:36.478 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:36.478 job3: (groupid=0, jobs=1): err= 0: pid=2659623: Wed Nov 20 07:31:39 2024 00:29:36.478 read: IOPS=22, BW=91.8KiB/s (94.0kB/s)(92.0KiB/1002msec) 00:29:36.478 slat (nsec): min=6498, max=33090, avg=13971.78, stdev=4645.12 00:29:36.478 clat (usec): min=236, max=41047, avg=39199.92, stdev=8493.97 00:29:36.478 lat (usec): min=242, max=41060, avg=39213.90, stdev=8495.60 00:29:36.478 clat percentiles (usec): 00:29:36.478 | 1.00th=[ 237], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:29:36.478 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:29:36.478 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:29:36.478 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:29:36.478 | 99.99th=[41157] 00:29:36.478 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:29:36.478 slat (nsec): min=5898, max=33053, avg=8332.63, stdev=2680.88 00:29:36.478 clat (usec): min=155, max=346, avg=184.24, stdev=25.72 00:29:36.478 lat (usec): min=164, max=379, avg=192.57, stdev=26.36 00:29:36.478 clat percentiles (usec): 00:29:36.478 | 1.00th=[ 159], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 167], 00:29:36.478 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 180], 00:29:36.478 | 70.00th=[ 184], 80.00th=[ 194], 90.00th=[ 235], 95.00th=[ 245], 00:29:36.478 | 99.00th=[ 260], 99.50th=[ 293], 99.90th=[ 347], 99.95th=[ 347], 00:29:36.478 | 99.99th=[ 347] 00:29:36.478 
bw ( KiB/s): min= 4096, max= 4096, per=22.89%, avg=4096.00, stdev= 0.00, samples=1 00:29:36.478 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:36.478 lat (usec) : 250=94.02%, 500=1.87% 00:29:36.478 lat (msec) : 50=4.11% 00:29:36.478 cpu : usr=0.30%, sys=0.30%, ctx=535, majf=0, minf=2 00:29:36.478 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:36.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.478 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.478 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:36.478 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:36.478 00:29:36.478 Run status group 0 (all jobs): 00:29:36.478 READ: bw=11.0MiB/s (11.5MB/s), 91.7KiB/s-9195KiB/s (93.9kB/s-9415kB/s), io=11.3MiB (11.8MB), run=1001-1030msec 00:29:36.478 WRITE: bw=17.5MiB/s (18.3MB/s), 2042KiB/s-9.99MiB/s (2091kB/s-10.5MB/s), io=18.0MiB (18.9MB), run=1001-1030msec 00:29:36.478 00:29:36.478 Disk stats (read/write): 00:29:36.478 nvme0n1: ios=2072/2049, merge=0/0, ticks=1410/355, in_queue=1765, util=100.00% 00:29:36.478 nvme0n2: ios=55/512, merge=0/0, ticks=1199/93, in_queue=1292, util=100.00% 00:29:36.478 nvme0n3: ios=585/1024, merge=0/0, ticks=1441/227, in_queue=1668, util=99.79% 00:29:36.478 nvme0n4: ios=19/512, merge=0/0, ticks=738/96, in_queue=834, util=89.73% 00:29:36.478 07:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:29:36.478 [global] 00:29:36.478 thread=1 00:29:36.478 invalidate=1 00:29:36.478 rw=write 00:29:36.478 time_based=1 00:29:36.478 runtime=1 00:29:36.478 ioengine=libaio 00:29:36.478 direct=1 00:29:36.478 bs=4096 00:29:36.478 iodepth=128 00:29:36.478 norandommap=0 00:29:36.478 numjobs=1 00:29:36.478 00:29:36.478 verify_dump=1 00:29:36.478 verify_backlog=512 00:29:36.478 verify_state_save=0 00:29:36.478 do_verify=1 00:29:36.478 verify=crc32c-intel 00:29:36.478 [job0] 00:29:36.478 filename=/dev/nvme0n1 00:29:36.478 [job1] 00:29:36.478 filename=/dev/nvme0n2 00:29:36.478 [job2] 00:29:36.478 filename=/dev/nvme0n3 00:29:36.478 [job3] 00:29:36.478 filename=/dev/nvme0n4 00:29:36.478 Could not set queue depth (nvme0n1) 00:29:36.478 Could not set queue depth (nvme0n2) 00:29:36.478 Could not set queue depth (nvme0n3) 00:29:36.478 Could not set queue depth (nvme0n4) 00:29:36.478 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:36.478 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:36.478 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:36.478 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:36.478 fio-3.35 00:29:36.478 Starting 4 threads 00:29:37.853 00:29:37.853 job0: (groupid=0, jobs=1): err= 0: pid=2659966: Wed Nov 20 07:31:41 2024 00:29:37.853 read: IOPS=3134, BW=12.2MiB/s (12.8MB/s)(12.4MiB/1010msec) 00:29:37.853 slat (usec): min=2, max=20715, avg=154.28, stdev=1137.55 00:29:37.853 clat (usec): min=1601, max=69877, avg=19436.53, stdev=9969.72 00:29:37.853 lat (usec): min=1617, max=69900, avg=19590.81, stdev=10066.29 00:29:37.853 clat percentiles (usec): 00:29:37.853 | 1.00th=[ 1975], 5.00th=[ 8356], 10.00th=[ 9372], 20.00th=[11863], 
00:29:37.853 | 30.00th=[12256], 40.00th=[13435], 50.00th=[17695], 60.00th=[21627], 00:29:37.853 | 70.00th=[22414], 80.00th=[25822], 90.00th=[30802], 95.00th=[42206], 00:29:37.853 | 99.00th=[50070], 99.50th=[57410], 99.90th=[69731], 99.95th=[69731], 00:29:37.853 | 99.99th=[69731] 00:29:37.853 write: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec); 0 zone resets 00:29:37.853 slat (usec): min=3, max=25443, avg=128.35, stdev=878.38 00:29:37.853 clat (usec): min=3520, max=75580, avg=18603.94, stdev=10516.83 00:29:37.853 lat (usec): min=3529, max=75588, avg=18732.29, stdev=10570.48 00:29:37.853 clat percentiles (usec): 00:29:37.853 | 1.00th=[ 6128], 5.00th=[ 8717], 10.00th=[10290], 20.00th=[11207], 00:29:37.853 | 30.00th=[11731], 40.00th=[14091], 50.00th=[15270], 60.00th=[19006], 00:29:37.853 | 70.00th=[22152], 80.00th=[23462], 90.00th=[27919], 95.00th=[38011], 00:29:37.853 | 99.00th=[65799], 99.50th=[72877], 99.90th=[76022], 99.95th=[76022], 00:29:37.853 | 99.99th=[76022] 00:29:37.853 bw ( KiB/s): min=13528, max=14909, per=21.64%, avg=14218.50, stdev=976.51, samples=2 00:29:37.853 iops : min= 3382, max= 3727, avg=3554.50, stdev=243.95, samples=2 00:29:37.853 lat (msec) : 2=0.53%, 4=0.09%, 10=10.46%, 20=47.75%, 50=39.53% 00:29:37.853 lat (msec) : 100=1.64% 00:29:37.853 cpu : usr=3.37%, sys=6.94%, ctx=266, majf=0, minf=2 00:29:37.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:29:37.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:37.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:37.853 issued rwts: total=3166,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:37.853 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:37.853 job1: (groupid=0, jobs=1): err= 0: pid=2659967: Wed Nov 20 07:31:41 2024 00:29:37.853 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:29:37.853 slat (usec): min=2, max=6918, avg=91.55, stdev=495.16 00:29:37.853 clat (usec): min=7924, max=28380, avg=12075.78, stdev=2846.35 00:29:37.853 lat (usec): min=8243, max=28419, avg=12167.34, stdev=2883.67 00:29:37.853 clat percentiles (usec): 00:29:37.854 | 1.00th=[ 8717], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10552], 00:29:37.854 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11207], 60.00th=[11469], 00:29:37.854 | 70.00th=[11863], 80.00th=[12518], 90.00th=[16057], 95.00th=[19006], 00:29:37.854 | 99.00th=[21890], 99.50th=[23987], 99.90th=[24249], 99.95th=[27657], 00:29:37.854 | 99.99th=[28443] 00:29:37.854 write: IOPS=5291, BW=20.7MiB/s (21.7MB/s)(20.7MiB/1002msec); 0 zone resets 00:29:37.854 slat (usec): min=3, max=8942, avg=90.13, stdev=487.54 00:29:37.854 clat (usec): min=418, max=26981, avg=12234.94, stdev=2789.96 00:29:37.854 lat (usec): min=3313, max=27019, avg=12325.08, stdev=2824.22 00:29:37.854 clat percentiles (usec): 00:29:37.854 | 1.00th=[ 6849], 5.00th=[ 8717], 10.00th=[10028], 20.00th=[10814], 00:29:37.854 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11600], 00:29:37.854 | 70.00th=[12125], 80.00th=[14222], 90.00th=[16319], 95.00th=[18220], 00:29:37.854 | 99.00th=[21365], 99.50th=[21627], 99.90th=[24249], 99.95th=[25035], 00:29:37.854 | 99.99th=[26870] 00:29:37.854 bw ( KiB/s): min=20128, max=21264, per=31.49%, avg=20696.00, stdev=803.27, samples=2 00:29:37.854 iops : min= 5032, max= 5316, avg=5174.00, stdev=200.82, samples=2 00:29:37.854 lat (usec) : 500=0.01% 00:29:37.854 lat (msec) : 4=0.40%, 10=9.62%, 20=86.57%, 50=3.40% 00:29:37.854 cpu : usr=7.59%, sys=9.89%, ctx=533, 
majf=0, minf=1 00:29:37.854 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:29:37.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:37.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:37.854 issued rwts: total=5120,5302,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:37.854 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:37.854 job2: (groupid=0, jobs=1): err= 0: pid=2659968: Wed Nov 20 07:31:41 2024 00:29:37.854 read: IOPS=3020, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1017msec) 00:29:37.854 slat (usec): min=2, max=28217, avg=156.76, stdev=1302.33 00:29:37.854 clat (usec): min=4774, max=57349, avg=20341.34, stdev=8034.93 00:29:37.854 lat (usec): min=4782, max=57354, avg=20498.09, stdev=8129.76 00:29:37.854 clat percentiles (usec): 00:29:37.854 | 1.00th=[ 9110], 5.00th=[ 9765], 10.00th=[11994], 20.00th=[13173], 00:29:37.854 | 30.00th=[13435], 40.00th=[17171], 50.00th=[19530], 60.00th=[21627], 00:29:37.854 | 70.00th=[23987], 80.00th=[27132], 90.00th=[30278], 95.00th=[32113], 00:29:37.854 | 99.00th=[47449], 99.50th=[52691], 99.90th=[57410], 99.95th=[57410], 00:29:37.854 | 99.99th=[57410] 00:29:37.854 write: IOPS=3160, BW=12.3MiB/s (12.9MB/s)(12.6MiB/1017msec); 0 zone resets 00:29:37.854 slat (usec): min=3, max=15951, avg=149.79, stdev=940.24 00:29:37.854 clat (usec): min=3416, max=98847, avg=20708.28, stdev=14614.99 00:29:37.854 lat (usec): min=3435, max=98864, avg=20858.07, stdev=14704.25 00:29:37.854 clat percentiles (usec): 00:29:37.854 | 1.00th=[ 6783], 5.00th=[ 7898], 10.00th=[11863], 20.00th=[12649], 00:29:37.854 | 30.00th=[13042], 40.00th=[14091], 50.00th=[16450], 60.00th=[19268], 00:29:37.854 | 70.00th=[23200], 80.00th=[23725], 90.00th=[29754], 95.00th=[44303], 00:29:37.854 | 99.00th=[92799], 99.50th=[95945], 99.90th=[99091], 99.95th=[99091], 00:29:37.854 | 99.99th=[99091] 00:29:37.854 bw ( KiB/s): min= 8312, max=16384, per=18.79%, avg=12348.00, stdev=5707.77, samples=2 00:29:37.854 iops : min= 2078, max= 4096, avg=3087.00, stdev=1426.94, samples=2 00:29:37.854 lat (msec) : 4=0.10%, 10=6.70%, 20=50.16%, 50=40.28%, 100=2.77% 00:29:37.854 cpu : usr=4.43%, sys=6.40%, ctx=273, majf=0, minf=1 00:29:37.854 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:29:37.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:37.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:37.854 issued rwts: total=3072,3214,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:37.854 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:37.854 job3: (groupid=0, jobs=1): err= 0: pid=2659969: Wed Nov 20 07:31:41 2024 00:29:37.854 read: IOPS=4423, BW=17.3MiB/s (18.1MB/s)(17.3MiB/1001msec) 00:29:37.854 slat (usec): min=3, max=3271, avg=101.80, stdev=458.99 00:29:37.854 clat (usec): min=477, max=16689, avg=13322.02, stdev=1466.53 00:29:37.854 lat (usec): min=3341, max=17926, avg=13423.82, stdev=1440.65 00:29:37.854 clat percentiles (usec): 00:29:37.854 | 1.00th=[ 6521], 5.00th=[11338], 10.00th=[11994], 20.00th=[12780], 00:29:37.854 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13435], 60.00th=[13698], 00:29:37.854 | 70.00th=[13960], 80.00th=[14222], 90.00th=[14615], 95.00th=[15139], 00:29:37.854 | 99.00th=[15926], 99.50th=[16057], 99.90th=[16450], 99.95th=[16712], 00:29:37.854 | 99.99th=[16712] 00:29:37.854 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:29:37.854 slat (usec): min=4, max=27200, avg=108.34, 
stdev=647.70 00:29:37.854 clat (usec): min=9534, max=52839, avg=14546.64, stdev=6185.08 00:29:37.854 lat (usec): min=10119, max=52880, avg=14654.99, stdev=6206.97 00:29:37.854 clat percentiles (usec): 00:29:37.854 | 1.00th=[10552], 5.00th=[11076], 10.00th=[11600], 20.00th=[12387], 00:29:37.854 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13173], 60.00th=[13435], 00:29:37.854 | 70.00th=[13829], 80.00th=[14353], 90.00th=[15139], 95.00th=[27657], 00:29:37.854 | 99.00th=[47973], 99.50th=[52691], 99.90th=[52691], 99.95th=[52691], 00:29:37.854 | 99.99th=[52691] 00:29:37.854 bw ( KiB/s): min=16896, max=19968, per=28.05%, avg=18432.00, stdev=2172.23, samples=2 00:29:37.854 iops : min= 4224, max= 4992, avg=4608.00, stdev=543.06, samples=2 00:29:37.854 lat (usec) : 500=0.01% 00:29:37.854 lat (msec) : 4=0.35%, 10=0.76%, 20=96.05%, 50=2.48%, 100=0.34% 00:29:37.854 cpu : usr=6.10%, sys=10.60%, ctx=481, majf=0, minf=1 00:29:37.854 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:29:37.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:37.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:37.854 issued rwts: total=4428,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:37.854 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:37.854 00:29:37.854 Run status group 0 (all jobs): 00:29:37.854 READ: bw=60.6MiB/s (63.6MB/s), 11.8MiB/s-20.0MiB/s (12.4MB/s-20.9MB/s), io=61.7MiB (64.7MB), run=1001-1017msec 00:29:37.854 WRITE: bw=64.2MiB/s (67.3MB/s), 12.3MiB/s-20.7MiB/s (12.9MB/s-21.7MB/s), io=65.3MiB (68.4MB), run=1001-1017msec 00:29:37.854 00:29:37.854 Disk stats (read/write): 00:29:37.854 nvme0n1: ios=2862/3072, merge=0/0, ticks=46562/45282, in_queue=91844, util=86.97% 00:29:37.854 nvme0n2: ios=4130/4608, merge=0/0, ticks=16861/18919, in_queue=35780, util=86.50% 00:29:37.854 nvme0n3: ios=2560/2646, merge=0/0, ticks=52563/45819, in_queue=98382, util=88.95% 00:29:37.854 nvme0n4: ios=3615/4011, merge=0/0, ticks=11848/13343, in_queue=25191, util=99.26% 00:29:37.854 07:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:29:37.854 [global] 00:29:37.854 thread=1 00:29:37.854 invalidate=1 00:29:37.854 rw=randwrite 00:29:37.854 time_based=1 00:29:37.854 runtime=1 00:29:37.854 ioengine=libaio 00:29:37.854 direct=1 00:29:37.854 bs=4096 00:29:37.854 iodepth=128 00:29:37.854 norandommap=0 00:29:37.854 numjobs=1 00:29:37.854 00:29:37.854 verify_dump=1 00:29:37.854 verify_backlog=512 00:29:37.854 verify_state_save=0 00:29:37.854 do_verify=1 00:29:37.854 verify=crc32c-intel 00:29:37.854 [job0] 00:29:37.854 filename=/dev/nvme0n1 00:29:37.854 [job1] 00:29:37.854 filename=/dev/nvme0n2 00:29:37.854 [job2] 00:29:37.854 filename=/dev/nvme0n3 00:29:37.854 [job3] 00:29:37.854 filename=/dev/nvme0n4 00:29:37.854 Could not set queue depth (nvme0n1) 00:29:37.854 Could not set queue depth (nvme0n2) 00:29:37.854 Could not set queue depth (nvme0n3) 00:29:37.854 Could not set queue depth (nvme0n4) 00:29:37.854 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:37.854 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:37.854 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:37.854 job3: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:37.854 fio-3.35 00:29:37.854 Starting 4 threads 00:29:39.227 00:29:39.227 job0: (groupid=0, jobs=1): err= 0: pid=2660193: Wed Nov 20 07:31:42 2024 00:29:39.227 read: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec) 00:29:39.227 slat (usec): min=3, max=13376, avg=203.69, stdev=1156.35 00:29:39.227 clat (usec): min=10206, max=46281, avg=24923.43, stdev=7583.96 00:29:39.227 lat (usec): min=10211, max=46303, avg=25127.12, stdev=7678.26 00:29:39.227 clat percentiles (usec): 00:29:39.227 | 1.00th=[10683], 5.00th=[14877], 10.00th=[15139], 20.00th=[16057], 00:29:39.227 | 30.00th=[19530], 40.00th=[21890], 50.00th=[25297], 60.00th=[28443], 00:29:39.227 | 70.00th=[29492], 80.00th=[31589], 90.00th=[34866], 95.00th=[35914], 00:29:39.227 | 99.00th=[42206], 99.50th=[44827], 99.90th=[44827], 99.95th=[45876], 00:29:39.227 | 99.99th=[46400] 00:29:39.227 write: IOPS=2558, BW=10.00MiB/s (10.5MB/s)(10.0MiB/1002msec); 0 zone resets 00:29:39.227 slat (usec): min=4, max=23445, avg=177.67, stdev=1082.68 00:29:39.227 clat (usec): min=1040, max=58810, avg=24477.18, stdev=10010.02 00:29:39.227 lat (usec): min=8401, max=58823, avg=24654.85, stdev=10088.86 00:29:39.227 clat percentiles (usec): 00:29:39.227 | 1.00th=[11338], 5.00th=[11731], 10.00th=[11994], 20.00th=[15401], 00:29:39.227 | 30.00th=[17433], 40.00th=[21365], 50.00th=[23725], 60.00th=[24249], 00:29:39.227 | 70.00th=[26870], 80.00th=[30278], 90.00th=[38011], 95.00th=[44303], 00:29:39.227 | 99.00th=[54264], 99.50th=[54264], 99.90th=[54264], 99.95th=[57410], 00:29:39.227 | 99.99th=[58983] 00:29:39.227 bw ( KiB/s): min= 9920, max=10560, per=15.60%, avg=10240.00, stdev=452.55, samples=2 00:29:39.227 iops : min= 2480, max= 2640, avg=2560.00, stdev=113.14, samples=2 00:29:39.227 lat (msec) : 2=0.02%, 10=0.14%, 20=33.04%, 50=64.58%, 100=2.22% 00:29:39.227 cpu : usr=3.20%, sys=4.20%, ctx=217, majf=0, minf=2 00:29:39.227 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:29:39.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:39.227 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:39.227 issued rwts: total=2560,2564,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:39.227 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:39.227 job1: (groupid=0, jobs=1): err= 0: pid=2660194: Wed Nov 20 07:31:42 2024 00:29:39.227 read: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec) 00:29:39.227 slat (usec): min=2, max=3515, avg=84.67, stdev=445.86 00:29:39.227 clat (usec): min=7950, max=14911, avg=11031.72, stdev=991.63 00:29:39.227 lat (usec): min=7963, max=14915, avg=11116.39, stdev=1041.13 00:29:39.227 clat percentiles (usec): 00:29:39.227 | 1.00th=[ 8586], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10421], 00:29:39.227 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:29:39.227 | 70.00th=[11338], 80.00th=[11600], 90.00th=[12387], 95.00th=[12780], 00:29:39.227 | 99.00th=[13960], 99.50th=[14091], 99.90th=[14484], 99.95th=[14746], 00:29:39.227 | 99.99th=[14877] 00:29:39.227 write: IOPS=5700, BW=22.3MiB/s (23.3MB/s)(22.4MiB/1004msec); 0 zone resets 00:29:39.227 slat (usec): min=4, max=9454, avg=82.79, stdev=380.41 00:29:39.227 clat (usec): min=3028, max=19497, avg=11280.81, stdev=1526.99 00:29:39.227 lat (usec): min=3057, max=19503, avg=11363.60, stdev=1543.14 00:29:39.227 clat percentiles (usec): 00:29:39.227 | 1.00th=[ 6652], 5.00th=[ 8717], 10.00th=[10290], 
20.00th=[10683], 00:29:39.227 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:29:39.227 | 70.00th=[11600], 80.00th=[11731], 90.00th=[12256], 95.00th=[13960], 00:29:39.227 | 99.00th=[16909], 99.50th=[17695], 99.90th=[19530], 99.95th=[19530], 00:29:39.227 | 99.99th=[19530] 00:29:39.227 bw ( KiB/s): min=21424, max=23584, per=34.28%, avg=22504.00, stdev=1527.35, samples=2 00:29:39.227 iops : min= 5356, max= 5896, avg=5626.00, stdev=381.84, samples=2 00:29:39.227 lat (msec) : 4=0.09%, 10=10.78%, 20=89.13% 00:29:39.227 cpu : usr=5.48%, sys=10.37%, ctx=635, majf=0, minf=1 00:29:39.227 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:29:39.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:39.227 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:39.228 issued rwts: total=5632,5723,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:39.228 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:39.228 job2: (groupid=0, jobs=1): err= 0: pid=2660195: Wed Nov 20 07:31:42 2024 00:29:39.228 read: IOPS=2616, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1003msec) 00:29:39.228 slat (usec): min=3, max=11722, avg=169.67, stdev=1056.10 00:29:39.228 clat (usec): min=1399, max=35123, avg=21229.94, stdev=3868.98 00:29:39.228 lat (usec): min=4391, max=35139, avg=21399.61, stdev=3930.79 00:29:39.228 clat percentiles (usec): 00:29:39.228 | 1.00th=[11600], 5.00th=[15270], 10.00th=[16909], 20.00th=[18220], 00:29:39.228 | 30.00th=[19268], 40.00th=[20055], 50.00th=[20579], 60.00th=[21890], 00:29:39.228 | 70.00th=[22676], 80.00th=[24511], 90.00th=[26084], 95.00th=[27132], 00:29:39.228 | 99.00th=[31589], 99.50th=[31589], 99.90th=[32375], 99.95th=[34341], 00:29:39.228 | 99.99th=[34866] 00:29:39.228 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:29:39.228 slat (usec): min=3, max=10663, avg=172.38, stdev=935.60 00:29:39.228 clat (usec): min=8475, max=47057, avg=23079.32, stdev=7821.54 00:29:39.228 lat (usec): min=8480, max=47068, avg=23251.70, stdev=7905.94 00:29:39.228 clat percentiles (usec): 00:29:39.228 | 1.00th=[10683], 5.00th=[14877], 10.00th=[16188], 20.00th=[18744], 00:29:39.228 | 30.00th=[19268], 40.00th=[19530], 50.00th=[20055], 60.00th=[20841], 00:29:39.228 | 70.00th=[24249], 80.00th=[25297], 90.00th=[38536], 95.00th=[41157], 00:29:39.228 | 99.00th=[44827], 99.50th=[45351], 99.90th=[46924], 99.95th=[46924], 00:29:39.228 | 99.99th=[46924] 00:29:39.228 bw ( KiB/s): min=11736, max=12311, per=18.31%, avg=12023.50, stdev=406.59, samples=2 00:29:39.228 iops : min= 2934, max= 3077, avg=3005.50, stdev=101.12, samples=2 00:29:39.228 lat (msec) : 2=0.02%, 10=0.37%, 20=42.52%, 50=57.09% 00:29:39.228 cpu : usr=3.79%, sys=4.49%, ctx=267, majf=0, minf=1 00:29:39.228 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:29:39.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:39.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:39.228 issued rwts: total=2624,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:39.228 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:39.228 job3: (groupid=0, jobs=1): err= 0: pid=2660196: Wed Nov 20 07:31:42 2024 00:29:39.228 read: IOPS=4729, BW=18.5MiB/s (19.4MB/s)(18.5MiB/1003msec) 00:29:39.228 slat (usec): min=2, max=6486, avg=98.79, stdev=619.52 00:29:39.228 clat (usec): min=1235, max=19206, avg=12616.30, stdev=2137.66 00:29:39.228 lat (usec): min=3397, max=19212, 
avg=12715.09, stdev=2174.32 00:29:39.228 clat percentiles (usec): 00:29:39.228 | 1.00th=[ 6849], 5.00th=[ 9634], 10.00th=[10552], 20.00th=[11338], 00:29:39.228 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12125], 60.00th=[12649], 00:29:39.228 | 70.00th=[13435], 80.00th=[14222], 90.00th=[15795], 95.00th=[16909], 00:29:39.228 | 99.00th=[17957], 99.50th=[18220], 99.90th=[18744], 99.95th=[19006], 00:29:39.228 | 99.99th=[19268] 00:29:39.228 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:29:39.228 slat (usec): min=4, max=6420, avg=96.48, stdev=548.55 00:29:39.228 clat (usec): min=6248, max=31047, avg=13088.16, stdev=2849.54 00:29:39.228 lat (usec): min=6256, max=31056, avg=13184.65, stdev=2895.92 00:29:39.228 clat percentiles (usec): 00:29:39.228 | 1.00th=[ 7832], 5.00th=[10290], 10.00th=[11338], 20.00th=[11863], 00:29:39.228 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12518], 60.00th=[12780], 00:29:39.228 | 70.00th=[13042], 80.00th=[13304], 90.00th=[15008], 95.00th=[17957], 00:29:39.228 | 99.00th=[26608], 99.50th=[27657], 99.90th=[31065], 99.95th=[31065], 00:29:39.228 | 99.99th=[31065] 00:29:39.228 bw ( KiB/s): min=20480, max=20480, per=31.19%, avg=20480.00, stdev= 0.00, samples=2 00:29:39.228 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:29:39.228 lat (msec) : 2=0.01%, 4=0.15%, 10=4.97%, 20=93.09%, 50=1.78% 00:29:39.228 cpu : usr=5.29%, sys=7.39%, ctx=424, majf=0, minf=2 00:29:39.228 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:29:39.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:39.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:39.228 issued rwts: total=4744,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:39.228 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:39.228 00:29:39.228 Run status group 0 (all jobs): 00:29:39.228 READ: bw=60.5MiB/s (63.5MB/s), 9.98MiB/s-21.9MiB/s (10.5MB/s-23.0MB/s), io=60.8MiB (63.7MB), run=1002-1004msec 00:29:39.228 WRITE: bw=64.1MiB/s (67.2MB/s), 10.00MiB/s-22.3MiB/s (10.5MB/s-23.3MB/s), io=64.4MiB (67.5MB), run=1002-1004msec 00:29:39.228 00:29:39.228 Disk stats (read/write): 00:29:39.228 nvme0n1: ios=1977/2048, merge=0/0, ticks=18624/15453, in_queue=34077, util=98.80% 00:29:39.228 nvme0n2: ios=4648/5020, merge=0/0, ticks=16622/17717, in_queue=34339, util=97.87% 00:29:39.228 nvme0n3: ios=2608/2631, merge=0/0, ticks=27458/25382, in_queue=52840, util=99.06% 00:29:39.228 nvme0n4: ios=4122/4183, merge=0/0, ticks=25554/26140, in_queue=51694, util=94.96% 00:29:39.228 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:29:39.228 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2660336 00:29:39.228 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:29:39.228 07:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:29:39.228 [global] 00:29:39.228 thread=1 00:29:39.228 invalidate=1 00:29:39.228 rw=read 00:29:39.228 time_based=1 00:29:39.228 runtime=10 00:29:39.228 ioengine=libaio 00:29:39.228 direct=1 00:29:39.228 bs=4096 00:29:39.228 iodepth=1 00:29:39.228 norandommap=1 00:29:39.228 numjobs=1 00:29:39.228 00:29:39.228 [job0] 00:29:39.228 filename=/dev/nvme0n1 00:29:39.228 [job1] 00:29:39.228 filename=/dev/nvme0n2 
00:29:39.228 [job2] 00:29:39.228 filename=/dev/nvme0n3 00:29:39.228 [job3] 00:29:39.228 filename=/dev/nvme0n4 00:29:39.228 Could not set queue depth (nvme0n1) 00:29:39.228 Could not set queue depth (nvme0n2) 00:29:39.228 Could not set queue depth (nvme0n3) 00:29:39.228 Could not set queue depth (nvme0n4) 00:29:39.485 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:39.485 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:39.485 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:39.485 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:39.485 fio-3.35 00:29:39.485 Starting 4 threads 00:29:42.762 07:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:29:42.762 07:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:29:42.763 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=3309568, buflen=4096 00:29:42.763 fio: pid=2660429, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:29:42.763 07:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:42.763 07:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:29:42.763 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=37609472, buflen=4096 00:29:42.763 fio: pid=2660428, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:29:43.021 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=55128064, buflen=4096 00:29:43.021 fio: pid=2660424, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:29:43.021 07:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:43.021 07:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:29:43.278 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=46288896, buflen=4096 00:29:43.278 fio: pid=2660425, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:29:43.278 07:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:43.278 07:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:29:43.536 00:29:43.536 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2660424: Wed Nov 20 07:31:46 2024 00:29:43.536 read: IOPS=3862, BW=15.1MiB/s (15.8MB/s)(52.6MiB/3485msec) 00:29:43.536 slat (usec): min=4, max=31685, avg=12.87, stdev=331.89 00:29:43.536 clat (usec): min=190, max=2353, avg=242.42, stdev=30.13 
00:29:43.536 lat (usec): min=195, max=31966, avg=255.28, stdev=333.88 00:29:43.536 clat percentiles (usec): 00:29:43.536 | 1.00th=[ 210], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 227], 00:29:43.536 | 30.00th=[ 233], 40.00th=[ 235], 50.00th=[ 239], 60.00th=[ 243], 00:29:43.536 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 262], 95.00th=[ 281], 00:29:43.536 | 99.00th=[ 338], 99.50th=[ 367], 99.90th=[ 537], 99.95th=[ 570], 00:29:43.536 | 99.99th=[ 676] 00:29:43.536 bw ( KiB/s): min=14424, max=16568, per=42.38%, avg=15552.00, stdev=778.40, samples=6 00:29:43.536 iops : min= 3606, max= 4142, avg=3888.00, stdev=194.60, samples=6 00:29:43.536 lat (usec) : 250=75.41%, 500=24.46%, 750=0.12% 00:29:43.536 lat (msec) : 4=0.01% 00:29:43.536 cpu : usr=0.98%, sys=3.90%, ctx=13464, majf=0, minf=1 00:29:43.536 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:43.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:43.537 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:43.537 issued rwts: total=13460,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:43.537 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:43.537 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2660425: Wed Nov 20 07:31:46 2024 00:29:43.537 read: IOPS=2983, BW=11.7MiB/s (12.2MB/s)(44.1MiB/3788msec) 00:29:43.537 slat (usec): min=5, max=21557, avg=14.89, stdev=261.94 00:29:43.537 clat (usec): min=188, max=49223, avg=314.79, stdev=1465.72 00:29:43.537 lat (usec): min=194, max=52765, avg=329.68, stdev=1531.32 00:29:43.537 clat percentiles (usec): 00:29:43.537 | 1.00th=[ 210], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 229], 00:29:43.537 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 258], 00:29:43.537 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 297], 95.00th=[ 367], 00:29:43.537 | 99.00th=[ 553], 99.50th=[ 578], 99.90th=[41157], 99.95th=[41157], 00:29:43.537 | 99.99th=[41157] 00:29:43.537 bw ( KiB/s): min= 392, max=15744, per=33.23%, avg=12195.43, stdev=5312.78, samples=7 00:29:43.537 iops : min= 98, max= 3936, avg=3048.86, stdev=1328.20, samples=7 00:29:43.537 lat (usec) : 250=51.03%, 500=47.31%, 750=1.48%, 1000=0.03% 00:29:43.537 lat (msec) : 2=0.02%, 20=0.01%, 50=0.12% 00:29:43.537 cpu : usr=1.98%, sys=4.70%, ctx=11309, majf=0, minf=2 00:29:43.537 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:43.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:43.537 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:43.537 issued rwts: total=11302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:43.537 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:43.537 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2660428: Wed Nov 20 07:31:46 2024 00:29:43.537 read: IOPS=2873, BW=11.2MiB/s (11.8MB/s)(35.9MiB/3196msec) 00:29:43.537 slat (usec): min=5, max=10665, avg=12.69, stdev=145.33 00:29:43.537 clat (usec): min=227, max=998, avg=330.10, stdev=69.56 00:29:43.537 lat (usec): min=234, max=10995, avg=342.79, stdev=162.16 00:29:43.537 clat percentiles (usec): 00:29:43.537 | 1.00th=[ 255], 5.00th=[ 269], 10.00th=[ 273], 20.00th=[ 281], 00:29:43.537 | 30.00th=[ 289], 40.00th=[ 293], 50.00th=[ 302], 60.00th=[ 318], 00:29:43.537 | 70.00th=[ 343], 80.00th=[ 371], 90.00th=[ 416], 95.00th=[ 498], 00:29:43.537 | 99.00th=[ 578], 99.50th=[ 603], 99.90th=[ 660], 
99.95th=[ 693], 00:29:43.537 | 99.99th=[ 996] 00:29:43.537 bw ( KiB/s): min=10040, max=12912, per=31.40%, avg=11521.33, stdev=1271.71, samples=6 00:29:43.537 iops : min= 2510, max= 3228, avg=2880.33, stdev=317.93, samples=6 00:29:43.537 lat (usec) : 250=0.47%, 500=95.08%, 750=4.40%, 1000=0.04% 00:29:43.537 cpu : usr=2.22%, sys=4.35%, ctx=9186, majf=0, minf=1 00:29:43.537 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:43.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:43.537 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:43.537 issued rwts: total=9183,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:43.537 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:43.537 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2660429: Wed Nov 20 07:31:46 2024 00:29:43.537 read: IOPS=276, BW=1105KiB/s (1131kB/s)(3232KiB/2926msec) 00:29:43.537 slat (nsec): min=6654, max=48793, avg=16079.83, stdev=7224.86 00:29:43.537 clat (usec): min=266, max=42131, avg=3571.05, stdev=10891.76 00:29:43.537 lat (usec): min=274, max=42146, avg=3587.13, stdev=10892.94 00:29:43.537 clat percentiles (usec): 00:29:43.537 | 1.00th=[ 281], 5.00th=[ 338], 10.00th=[ 355], 20.00th=[ 371], 00:29:43.537 | 30.00th=[ 383], 40.00th=[ 392], 50.00th=[ 400], 60.00th=[ 408], 00:29:43.537 | 70.00th=[ 420], 80.00th=[ 457], 90.00th=[ 562], 95.00th=[41157], 00:29:43.537 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:29:43.537 | 99.99th=[42206] 00:29:43.537 bw ( KiB/s): min= 96, max= 5952, per=3.47%, avg=1275.20, stdev=2614.42, samples=5 00:29:43.537 iops : min= 24, max= 1488, avg=318.80, stdev=653.60, samples=5 00:29:43.537 lat (usec) : 500=83.81%, 750=8.28% 00:29:43.537 lat (msec) : 50=7.79% 00:29:43.537 cpu : usr=0.24%, sys=0.65%, ctx=810, majf=0, minf=2 00:29:43.537 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:43.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:43.537 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:43.537 issued rwts: total=809,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:43.537 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:43.537 00:29:43.537 Run status group 0 (all jobs): 00:29:43.537 READ: bw=35.8MiB/s (37.6MB/s), 1105KiB/s-15.1MiB/s (1131kB/s-15.8MB/s), io=136MiB (142MB), run=2926-3788msec 00:29:43.537 00:29:43.537 Disk stats (read/write): 00:29:43.537 nvme0n1: ios=13091/0, merge=0/0, ticks=3105/0, in_queue=3105, util=94.65% 00:29:43.537 nvme0n2: ios=10725/0, merge=0/0, ticks=4404/0, in_queue=4404, util=98.37% 00:29:43.537 nvme0n3: ios=8951/0, merge=0/0, ticks=2847/0, in_queue=2847, util=96.20% 00:29:43.537 nvme0n4: ios=859/0, merge=0/0, ticks=3005/0, in_queue=3005, util=99.32% 00:29:43.537 07:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:43.537 07:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:29:44.102 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:44.102 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:29:44.102 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:44.102 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:29:44.667 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:44.667 07:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:29:44.667 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:29:44.667 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2660336 00:29:44.667 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:29:44.667 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:44.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:44.925 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:29:44.925 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:29:44.925 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:29:44.925 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:44.925 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:29:44.925 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:44.925 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:29:44.925 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:29:44.925 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:29:44.925 nvmf hotplug test: fio failed as expected 00:29:44.925 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:45.182 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:29:45.182 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:29:45.182 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:29:45.182 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:29:45.182 07:31:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:29:45.182 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:45.182 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:29:45.182 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:45.182 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:29:45.182 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:45.182 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:45.182 rmmod nvme_tcp 00:29:45.182 rmmod nvme_fabrics 00:29:45.182 rmmod nvme_keyring 00:29:45.182 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:45.182 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:29:45.182 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:29:45.182 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2658440 ']' 00:29:45.182 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2658440 00:29:45.182 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 2658440 ']' 00:29:45.182 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 2658440 00:29:45.183 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:29:45.183 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:45.183 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2658440 00:29:45.183 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:45.183 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:45.183 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2658440' 00:29:45.183 killing process with pid 2658440 00:29:45.183 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 2658440 00:29:45.183 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 2658440 00:29:45.441 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:45.441 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:45.441 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:45.441 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:29:45.441 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 
00:29:45.441 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:45.441 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:29:45.441 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:45.441 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:45.441 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.441 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:45.441 07:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:47.976 07:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:47.976 00:29:47.976 real 0m23.768s 00:29:47.976 user 1m5.879s 00:29:47.976 sys 0m11.321s 00:29:47.976 07:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:47.976 07:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:47.976 ************************************ 00:29:47.976 END TEST nvmf_fio_target 00:29:47.976 ************************************ 00:29:47.976 07:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:29:47.976 07:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:47.976 07:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:47.976 07:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:47.976 ************************************ 00:29:47.976 START TEST nvmf_bdevio 00:29:47.976 ************************************ 00:29:47.976 07:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:29:47.976 * Looking for test storage... 
00:29:47.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:47.976 07:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:47.976 07:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:29:47.976 07:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:47.976 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:47.976 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:47.976 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:47.976 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:47.976 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:29:47.976 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:29:47.976 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:29:47.976 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:29:47.976 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:29:47.976 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:29:47.976 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:29:47.976 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:47.976 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:29:47.976 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:29:47.976 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:47.976 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:47.976 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:29:47.976 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:29:47.976 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:47.976 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:29:47.976 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:29:47.976 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:29:47.976 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:29:47.976 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:47.976 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:29:47.976 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:29:47.976 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:47.976 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:47.976 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:29:47.976 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:47.976 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:47.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.976 --rc genhtml_branch_coverage=1 00:29:47.976 --rc genhtml_function_coverage=1 00:29:47.976 --rc genhtml_legend=1 00:29:47.976 --rc geninfo_all_blocks=1 00:29:47.976 --rc geninfo_unexecuted_blocks=1 00:29:47.976 00:29:47.976 ' 00:29:47.976 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:47.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.976 --rc genhtml_branch_coverage=1 00:29:47.977 --rc genhtml_function_coverage=1 00:29:47.977 --rc genhtml_legend=1 00:29:47.977 --rc geninfo_all_blocks=1 00:29:47.977 --rc geninfo_unexecuted_blocks=1 00:29:47.977 00:29:47.977 ' 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:47.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.977 --rc genhtml_branch_coverage=1 00:29:47.977 --rc genhtml_function_coverage=1 00:29:47.977 --rc genhtml_legend=1 00:29:47.977 --rc geninfo_all_blocks=1 00:29:47.977 --rc geninfo_unexecuted_blocks=1 00:29:47.977 00:29:47.977 ' 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:47.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.977 --rc genhtml_branch_coverage=1 00:29:47.977 --rc genhtml_function_coverage=1 00:29:47.977 --rc genhtml_legend=1 00:29:47.977 --rc geninfo_all_blocks=1 00:29:47.977 --rc geninfo_unexecuted_blocks=1 00:29:47.977 00:29:47.977 ' 00:29:47.977 07:31:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:47.977 07:31:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:29:47.977 07:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:49.880 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:49.880 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:29:49.880 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:49.880 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:49.880 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:49.880 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:49.880 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:49.880 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:29:49.880 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:29:49.880 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:29:49.880 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:29:49.880 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:29:49.880 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:29:49.880 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:29:49.880 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:29:49.880 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:49.880 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:49.880 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:49.880 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:49.880 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:49.880 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:49.880 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:49.880 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:49.880 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:49.880 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:49.880 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:49.880 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:49.880 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:49.880 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:49.880 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:49.880 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:49.880 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:49.881 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:49.881 07:31:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:49.881 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:49.881 Found net devices under 0000:09:00.0: cvl_0_0 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:49.881 Found net devices under 0000:09:00.1: cvl_0_1 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:49.881 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:50.139 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:50.139 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:50.139 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:50.139 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:50.139 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:50.139 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:50.139 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:50.139 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:50.139 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:50.139 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:29:50.139 00:29:50.139 --- 10.0.0.2 ping statistics --- 00:29:50.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:50.139 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:29:50.139 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:50.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:50.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:29:50.139 00:29:50.139 --- 10.0.0.1 ping statistics --- 00:29:50.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:50.139 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:29:50.139 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:50.139 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:29:50.139 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:50.139 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:50.139 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:50.139 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:50.139 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:50.140 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:50.140 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:50.140 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:29:50.140 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:50.140 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:50.140 07:31:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:50.140 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2663159 00:29:50.140 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:29:50.140 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2663159 00:29:50.140 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 2663159 ']' 00:29:50.140 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:50.140 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:50.140 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:50.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:50.140 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:50.140 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:50.140 [2024-11-20 07:31:53.471943] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:50.140 [2024-11-20 07:31:53.473043] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:29:50.140 [2024-11-20 07:31:53.473112] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:50.140 [2024-11-20 07:31:53.546468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:50.398 [2024-11-20 07:31:53.610171] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:50.398 [2024-11-20 07:31:53.610222] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:50.398 [2024-11-20 07:31:53.610250] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:50.398 [2024-11-20 07:31:53.610262] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:50.398 [2024-11-20 07:31:53.610272] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:50.398 [2024-11-20 07:31:53.612004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:50.398 [2024-11-20 07:31:53.612058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:50.398 [2024-11-20 07:31:53.612084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:50.398 [2024-11-20 07:31:53.612090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:50.398 [2024-11-20 07:31:53.714606] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
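The target above is launched with -m 0x78 under --interrupt-mode, and the reactor notices show cores 3, 4, 5 and 6 coming up because the mask is a plain bitmap of host CPUs. A minimal bash sketch of that mapping (a hypothetical helper, not part of nvmf/common.sh or the test harness):

# Decode an SPDK/DPDK core mask into the CPU cores it selects.
# Hypothetical helper, shown only to make "-m 0x78 -> cores 3 4 5 6" explicit.
decode_coremask() {
    local mask=$(( $1 ))
    local core=0
    local -a cores=()
    while (( mask )); do
        if (( mask & 1 )); then cores+=("$core"); fi
        mask=$(( mask >> 1 ))
        core=$(( core + 1 ))
    done
    echo "${cores[*]}"
}
decode_coremask 0x78   # prints: 3 4 5 6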
00:29:50.398 [2024-11-20 07:31:53.714837] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:50.398 [2024-11-20 07:31:53.715130] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:50.398 [2024-11-20 07:31:53.715831] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:50.398 [2024-11-20 07:31:53.716062] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:50.398 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:50.398 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:29:50.398 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:50.398 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:50.398 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:50.398 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:50.398 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:50.398 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.398 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:50.398 [2024-11-20 07:31:53.768780] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:50.398 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.398 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:50.398 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.398 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:50.398 Malloc0 00:29:50.398 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.398 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:50.398 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.398 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:50.657 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.657 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:50.657 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.657 07:31:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:50.657 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.657 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:50.657 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.657 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:50.657 [2024-11-20 07:31:53.844990] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:50.657 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.657 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:29:50.657 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:29:50.657 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:29:50.657 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:29:50.657 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:50.657 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:50.657 { 00:29:50.657 "params": { 00:29:50.657 "name": "Nvme$subsystem", 00:29:50.657 "trtype": "$TEST_TRANSPORT", 00:29:50.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:50.657 "adrfam": "ipv4", 00:29:50.657 "trsvcid": "$NVMF_PORT", 00:29:50.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:50.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:50.657 "hdgst": ${hdgst:-false}, 00:29:50.657 "ddgst": ${ddgst:-false} 00:29:50.657 }, 00:29:50.657 "method": "bdev_nvme_attach_controller" 00:29:50.657 } 00:29:50.657 EOF 00:29:50.657 )") 00:29:50.657 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:29:50.657 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:29:50.657 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:29:50.657 07:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:50.657 "params": { 00:29:50.657 "name": "Nvme1", 00:29:50.657 "trtype": "tcp", 00:29:50.657 "traddr": "10.0.0.2", 00:29:50.657 "adrfam": "ipv4", 00:29:50.657 "trsvcid": "4420", 00:29:50.657 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:50.657 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:50.657 "hdgst": false, 00:29:50.657 "ddgst": false 00:29:50.657 }, 00:29:50.657 "method": "bdev_nvme_attach_controller" 00:29:50.657 }' 00:29:50.657 [2024-11-20 07:31:53.895185] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
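The gen_nvmf_target_json output printed above is handed to bdevio on /dev/fd/62. A sketch of running the same bdevio pass by hand with a file on disk, assuming the standard SPDK JSON-config wrapper ("subsystems" -> "bdev" -> "config") around the bdev_nvme_attach_controller entry the trace prints (the wrapper itself is not shown in the trace), and reusing the repo path, address and NQN from this run:

# Hypothetical file name; the harness pipes the JSON via /dev/fd/62 instead.
cat > /tmp/bdevio_nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same binary and path as target/bdevio.sh@24 above.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme1.json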
00:29:50.657 [2024-11-20 07:31:53.895252] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2663202 ] 00:29:50.657 [2024-11-20 07:31:53.964024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:50.657 [2024-11-20 07:31:54.027236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:50.657 [2024-11-20 07:31:54.027285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:50.657 [2024-11-20 07:31:54.027289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:50.915 I/O targets: 00:29:50.915 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:29:50.915 00:29:50.915 00:29:50.915 CUnit - A unit testing framework for C - Version 2.1-3 00:29:50.915 http://cunit.sourceforge.net/ 00:29:50.915 00:29:50.915 00:29:50.915 Suite: bdevio tests on: Nvme1n1 00:29:51.173 Test: blockdev write read block ...passed 00:29:51.173 Test: blockdev write zeroes read block ...passed 00:29:51.173 Test: blockdev write zeroes read no split ...passed 00:29:51.173 Test: blockdev write zeroes read split ...passed 00:29:51.173 Test: blockdev write zeroes read split partial ...passed 00:29:51.173 Test: blockdev reset ...[2024-11-20 07:31:54.483206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:51.173 [2024-11-20 07:31:54.483318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c0640 (9): Bad file descriptor 00:29:51.173 [2024-11-20 07:31:54.528604] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:29:51.173 passed 00:29:51.173 Test: blockdev write read 8 blocks ...passed 00:29:51.173 Test: blockdev write read size > 128k ...passed 00:29:51.173 Test: blockdev write read invalid size ...passed 00:29:51.431 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:51.431 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:51.431 Test: blockdev write read max offset ...passed 00:29:51.431 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:51.431 Test: blockdev writev readv 8 blocks ...passed 00:29:51.431 Test: blockdev writev readv 30 x 1block ...passed 00:29:51.431 Test: blockdev writev readv block ...passed 00:29:51.431 Test: blockdev writev readv size > 128k ...passed 00:29:51.431 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:51.431 Test: blockdev comparev and writev ...[2024-11-20 07:31:54.742522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:51.431 [2024-11-20 07:31:54.742562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.431 [2024-11-20 07:31:54.742587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:51.431 [2024-11-20 07:31:54.742605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:51.431 [2024-11-20 07:31:54.742979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:51.431 [2024-11-20 07:31:54.743004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:51.431 [2024-11-20 07:31:54.743026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:51.431 [2024-11-20 07:31:54.743042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:51.431 [2024-11-20 07:31:54.743421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:51.431 [2024-11-20 07:31:54.743446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:51.431 [2024-11-20 07:31:54.743468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:51.431 [2024-11-20 07:31:54.743484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:51.431 [2024-11-20 07:31:54.743847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:51.431 [2024-11-20 07:31:54.743872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:51.431 [2024-11-20 07:31:54.743894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:51.431 [2024-11-20 07:31:54.743910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:51.431 passed 00:29:51.431 Test: blockdev nvme passthru rw ...passed 00:29:51.431 Test: blockdev nvme passthru vendor specific ...[2024-11-20 07:31:54.826573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:51.432 [2024-11-20 07:31:54.826599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:51.432 [2024-11-20 07:31:54.826741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:51.432 [2024-11-20 07:31:54.826765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:51.432 [2024-11-20 07:31:54.826916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:51.432 [2024-11-20 07:31:54.826940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:51.432 [2024-11-20 07:31:54.827083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:51.432 [2024-11-20 07:31:54.827106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:51.432 passed 00:29:51.432 Test: blockdev nvme admin passthru ...passed 00:29:51.689 Test: blockdev copy ...passed 00:29:51.689 00:29:51.689 Run Summary: Type Total Ran Passed Failed Inactive 00:29:51.689 suites 1 1 n/a 0 0 00:29:51.689 tests 23 23 23 0 0 00:29:51.689 asserts 152 152 152 0 n/a 00:29:51.689 00:29:51.689 Elapsed time = 1.097 seconds 00:29:51.689 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:51.689 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.689 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:51.689 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.689 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:29:51.689 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:29:51.690 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:51.690 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:29:51.690 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:51.690 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:29:51.690 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:51.690 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:51.690 rmmod nvme_tcp 00:29:51.690 rmmod nvme_fabrics 00:29:51.690 rmmod nvme_keyring 00:29:51.948 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
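After the run summary, nvmftestfini tears the test bed back down. A hand-rolled sketch of the equivalent cleanup, assuming the pid, namespace and interface names from this run; the module removal, the kill and the iptables restore are traced directly, while the namespace deletion is the assumed effect of remove_spdk_ns, whose output is redirected away here:

# Stop the target started for this test (pid 2663159 in this run).
# The harness does "kill ... && wait ..."; wait only works from the shell
# that launched nvmf_tgt.
kill 2663159
# Unload the kernel initiator modules loaded for the test.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
# Drop the SPDK_NVMF-tagged ACCEPT rule that was inserted for port 4420.
iptables-save | grep -v SPDK_NVMF | iptables-restore
# Assumed effect of remove_spdk_ns: delete the target namespace, then
# clear the initiator-side address (the flush is traced explicitly).
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1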
00:29:51.948 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:29:51.948 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:29:51.948 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2663159 ']' 00:29:51.948 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2663159 00:29:51.948 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 2663159 ']' 00:29:51.948 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 2663159 00:29:51.948 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:29:51.948 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:51.948 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2663159 00:29:51.948 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:29:51.948 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:29:51.948 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2663159' 00:29:51.948 killing process with pid 2663159 00:29:51.948 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 2663159 00:29:51.948 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 2663159 00:29:52.208 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:52.208 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:52.208 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:52.208 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:29:52.208 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:29:52.208 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:52.208 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:29:52.208 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:52.208 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:52.208 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.208 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:52.208 07:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:54.116 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:54.116 00:29:54.116 real 0m6.528s 00:29:54.116 user 
0m8.840s 00:29:54.116 sys 0m2.610s 00:29:54.116 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:54.116 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:54.116 ************************************ 00:29:54.116 END TEST nvmf_bdevio 00:29:54.116 ************************************ 00:29:54.116 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:29:54.116 00:29:54.116 real 3m55.652s 00:29:54.116 user 8m53.006s 00:29:54.116 sys 1m25.713s 00:29:54.116 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:54.116 07:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:54.116 ************************************ 00:29:54.116 END TEST nvmf_target_core_interrupt_mode 00:29:54.116 ************************************ 00:29:54.116 07:31:57 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:29:54.116 07:31:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:54.116 07:31:57 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:54.116 07:31:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:54.116 ************************************ 00:29:54.116 START TEST nvmf_interrupt 00:29:54.116 ************************************ 00:29:54.116 07:31:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:29:54.374 * Looking for test storage... 
00:29:54.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:54.374 07:31:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:54.374 07:31:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:29:54.374 07:31:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:54.374 07:31:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:54.374 07:31:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:54.374 07:31:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:54.374 07:31:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:54.374 07:31:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:29:54.374 07:31:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:29:54.374 07:31:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:29:54.374 07:31:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:29:54.374 07:31:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:29:54.374 07:31:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:29:54.374 07:31:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:29:54.374 07:31:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:54.374 07:31:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:29:54.374 07:31:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:29:54.374 07:31:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:54.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.375 --rc genhtml_branch_coverage=1 00:29:54.375 --rc genhtml_function_coverage=1 00:29:54.375 --rc genhtml_legend=1 00:29:54.375 --rc geninfo_all_blocks=1 00:29:54.375 --rc geninfo_unexecuted_blocks=1 00:29:54.375 00:29:54.375 ' 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:54.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.375 --rc genhtml_branch_coverage=1 00:29:54.375 --rc genhtml_function_coverage=1 00:29:54.375 --rc genhtml_legend=1 00:29:54.375 --rc geninfo_all_blocks=1 00:29:54.375 --rc geninfo_unexecuted_blocks=1 00:29:54.375 00:29:54.375 ' 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:54.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.375 --rc genhtml_branch_coverage=1 00:29:54.375 --rc genhtml_function_coverage=1 00:29:54.375 --rc genhtml_legend=1 00:29:54.375 --rc geninfo_all_blocks=1 00:29:54.375 --rc geninfo_unexecuted_blocks=1 00:29:54.375 00:29:54.375 ' 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:54.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.375 --rc genhtml_branch_coverage=1 00:29:54.375 --rc genhtml_function_coverage=1 00:29:54.375 --rc genhtml_legend=1 00:29:54.375 --rc geninfo_all_blocks=1 00:29:54.375 --rc geninfo_unexecuted_blocks=1 00:29:54.375 00:29:54.375 ' 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:29:54.375 07:31:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:56.963 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:56.963 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:29:56.963 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:56.963 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:56.963 07:31:59 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:56.963 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:56.963 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:56.963 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:29:56.963 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:56.963 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:29:56.963 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:29:56.963 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:56.964 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:56.964 07:31:59 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:56.964 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:56.964 Found net devices under 0000:09:00.0: cvl_0_0 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:56.964 Found net devices under 0000:09:00.1: cvl_0_1 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:56.964 07:31:59 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:56.964 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:56.964 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.343 ms 00:29:56.964 00:29:56.964 --- 10.0.0.2 ping statistics --- 00:29:56.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.964 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:56.964 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:56.964 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:29:56.964 00:29:56.964 --- 10.0.0.1 ping statistics --- 00:29:56.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.964 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=2665297 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 2665297 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@833 -- # '[' -z 2665297 ']' 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:56.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:56.964 07:31:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:56.965 07:31:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:56.965 [2024-11-20 07:31:59.995696] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:56.965 [2024-11-20 07:31:59.996804] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:29:56.965 [2024-11-20 07:31:59.996869] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:56.965 [2024-11-20 07:32:00.074018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:56.965 [2024-11-20 07:32:00.130862] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
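nvmf_tcp_init above builds the same two-namespace test bed as in the bdevio run: one port of the e810 pair is moved into a namespace and becomes the target side, the other stays in the root namespace as the initiator. A condensed sketch of the traced steps, using the interface names and addresses from this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                   # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow the NVMe/TCP port through the host firewall, tagged so teardown can find it.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                          # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1            # namespace -> initiator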
00:29:56.965 [2024-11-20 07:32:00.130931] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:56.965 [2024-11-20 07:32:00.130944] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:56.965 [2024-11-20 07:32:00.130968] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:56.965 [2024-11-20 07:32:00.130977] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:56.965 [2024-11-20 07:32:00.132418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:56.965 [2024-11-20 07:32:00.132424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.965 [2024-11-20 07:32:00.228268] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:56.965 [2024-11-20 07:32:00.228269] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:56.965 [2024-11-20 07:32:00.228590] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@866 -- # return 0 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:29:56.965 5000+0 records in 00:29:56.965 5000+0 records out 00:29:56.965 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0119834 s, 855 MB/s 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:56.965 AIO0 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:56.965 [2024-11-20 07:32:00.333101] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.965 07:32:00 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:56.965 [2024-11-20 07:32:00.357342] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2665297 0 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2665297 0 idle 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2665297 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2665297 -w 256 00:29:56.965 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:29:57.223 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2665297 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.28 reactor_0' 00:29:57.223 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2665297 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.28 reactor_0 00:29:57.223 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:29:57.223 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:29:57.223 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:29:57.223 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:29:57.223 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:29:57.223 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:29:57.223 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:29:57.223 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:29:57.223 07:32:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:29:57.223 07:32:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2665297 1 00:29:57.223 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2665297 1 idle 00:29:57.223 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2665297 00:29:57.223 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:29:57.223 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:57.223 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:29:57.223 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:29:57.223 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:29:57.223 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:29:57.223 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:29:57.223 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:29:57.223 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:29:57.223 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2665297 -w 256 00:29:57.223 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2665303 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.00 reactor_1' 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2665303 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.00 reactor_1 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2665461 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 
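Note on the trace above: target/interrupt.sh has at this point driven the whole target-side bring-up through rpc_cmd (a file-backed AIO bdev, a TCP transport, a subsystem carrying that bdev as a namespace, and a listener on 10.0.0.2:4420) and is launching spdk_nvme_perf against that listener. The lines below are a minimal standalone sketch of the same RPC sequence, assuming a running nvmf_tgt reachable on the default /var/tmp/spdk.sock; the rpc.py path and the backing-file location are placeholders, not values taken from this run.

# Sketch only: replays the target-side setup seen above outside the test harness.
RPC=./scripts/rpc.py            # stock SPDK RPC client; path is an assumption
AIOFILE=/tmp/aiofile            # placeholder backing file, not the test's path
dd if=/dev/zero of="$AIOFILE" bs=2048 count=5000              # ~10 MB backing file
$RPC bdev_aio_create "$AIOFILE" AIO0 2048                     # AIO bdev, 2048-byte blocks
$RPC nvmf_create_transport -t tcp -o -u 8192 -q 256           # TCP transport, queue depth 256
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420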
00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2665297 0 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2665297 0 busy 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2665297 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2665297 -w 256 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2665297 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:00.48 reactor_0' 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2665297 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:00.48 reactor_0 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2665297 1 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2665297 1 busy 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2665297 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2665297 -w 256 00:29:57.482 07:32:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:29:57.741 07:32:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2665303 root 20 0 128.2g 48384 34944 R 93.3 0.1 0:00.25 reactor_1' 00:29:57.741 07:32:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2665303 root 20 0 128.2g 48384 34944 R 93.3 0.1 0:00.25 reactor_1 00:29:57.741 07:32:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:29:57.741 07:32:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:29:57.741 07:32:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:29:57.741 07:32:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:29:57.741 07:32:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:29:57.741 07:32:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:29:57.741 07:32:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:29:57.741 07:32:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:29:57.741 07:32:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2665461 00:30:07.711 Initializing NVMe Controllers 00:30:07.711 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:07.711 Controller IO queue size 256, less than required. 00:30:07.711 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:07.711 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:07.711 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:07.711 Initialization complete. Launching workers. 
00:30:07.711 ======================================================== 00:30:07.711 Latency(us) 00:30:07.711 Device Information : IOPS MiB/s Average min max 00:30:07.711 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 13150.70 51.37 19480.95 4194.67 24087.06 00:30:07.711 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 13942.60 54.46 18373.22 4498.11 22252.82 00:30:07.711 ======================================================== 00:30:07.711 Total : 27093.29 105.83 18910.90 4194.67 24087.06 00:30:07.711 00:30:07.711 07:32:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:30:07.711 07:32:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2665297 0 00:30:07.711 07:32:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2665297 0 idle 00:30:07.711 07:32:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2665297 00:30:07.711 07:32:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:07.711 07:32:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:07.711 07:32:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:07.711 07:32:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:07.711 07:32:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:07.711 07:32:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:07.711 07:32:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:07.711 07:32:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:07.711 07:32:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:07.711 07:32:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2665297 -w 256 00:30:07.711 07:32:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:07.711 07:32:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2665297 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:19.75 reactor_0' 00:30:07.711 07:32:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2665297 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:19.75 reactor_0 00:30:07.711 07:32:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:07.711 07:32:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:07.711 07:32:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:07.711 07:32:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:07.711 07:32:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:07.711 07:32:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:07.711 07:32:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:07.711 07:32:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:07.711 07:32:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:30:07.711 07:32:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2665297 1 00:30:07.711 07:32:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2665297 1 idle 00:30:07.711 07:32:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2665297 00:30:07.711 07:32:10 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:30:07.711 07:32:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:07.711 07:32:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:07.711 07:32:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:07.711 07:32:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:07.711 07:32:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:07.711 07:32:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:07.711 07:32:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:07.711 07:32:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:07.711 07:32:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2665297 -w 256 00:30:07.711 07:32:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:07.970 07:32:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2665303 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:09.50 reactor_1' 00:30:07.970 07:32:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2665303 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:09.50 reactor_1 00:30:07.970 07:32:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:07.970 07:32:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:07.970 07:32:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:07.970 07:32:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:07.970 07:32:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:07.970 07:32:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:07.970 07:32:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:07.970 07:32:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:07.970 07:32:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:30:07.970 07:32:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:30:07.970 07:32:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # local i=0 00:30:07.970 07:32:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:30:07.970 07:32:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:30:07.970 07:32:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # sleep 2 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # return 0 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2665297 0 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2665297 0 idle 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2665297 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2665297 -w 256 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2665297 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:19.85 reactor_0' 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2665297 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:19.85 reactor_0 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2665297 1 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2665297 1 idle 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2665297 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
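Note on the reactor checks: the reactor_is_idle / reactor_is_busy probes repeated before, during, and after the perf run all reduce to the same pattern visible in the trace: take one batched, per-thread top sample for the target pid, pick out the reactor_N thread, and compare its %CPU column against the idle or busy threshold. A small sketch of that probe follows, assuming the same top column layout as in this run (field 9 is %CPU); the helper name is mine, not part of interrupt/common.sh.

# Sketch only: approximates the reactor idle/busy check used by the test.
reactor_cpu_rate() {
    local pid=$1 idx=$2
    top -bHn 1 -p "$pid" -w 256 |       # one batch iteration, thread view
        grep "reactor_${idx}" |
        sed -e 's/^\s*//g' |
        awk '{print $9}'                # column 9 is %CPU in this layout
}
rate=$(reactor_cpu_rate 2665297 0)      # pid taken from this run, for illustration only
rate=${rate%.*}                         # keep the integer part, as the test does
if (( rate > 30 )); then echo "reactor_0 busy (${rate}%)"; else echo "reactor_0 idle (${rate}%)"; fi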
00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2665297 -w 256 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2665303 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:09.53 reactor_1' 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2665303 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:09.53 reactor_1 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:10.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1221 -- # local i=0 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1233 -- # return 0 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:10.501 rmmod nvme_tcp 00:30:10.501 rmmod nvme_fabrics 00:30:10.501 rmmod nvme_keyring 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
2665297 ']' 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 2665297 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@952 -- # '[' -z 2665297 ']' 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # kill -0 2665297 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # uname 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:10.501 07:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2665297 00:30:10.759 07:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:10.759 07:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:10.759 07:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2665297' 00:30:10.759 killing process with pid 2665297 00:30:10.759 07:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@971 -- # kill 2665297 00:30:10.759 07:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@976 -- # wait 2665297 00:30:11.017 07:32:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:11.017 07:32:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:11.017 07:32:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:11.017 07:32:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:30:11.017 07:32:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:11.017 07:32:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:30:11.017 07:32:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:30:11.017 07:32:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:11.017 07:32:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:11.017 07:32:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:11.017 07:32:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:11.017 07:32:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:12.922 07:32:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:12.922 00:30:12.922 real 0m18.724s 00:30:12.922 user 0m36.696s 00:30:12.922 sys 0m6.745s 00:30:12.922 07:32:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:12.922 07:32:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:12.922 ************************************ 00:30:12.922 END TEST nvmf_interrupt 00:30:12.922 ************************************ 00:30:12.922 00:30:12.922 real 24m58.776s 00:30:12.922 user 58m34.769s 00:30:12.922 sys 6m45.011s 00:30:12.922 07:32:16 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:12.922 07:32:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:12.922 ************************************ 00:30:12.922 END TEST nvmf_tcp 00:30:12.922 ************************************ 00:30:12.922 07:32:16 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:30:12.922 07:32:16 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:12.922 07:32:16 -- 
common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:12.922 07:32:16 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:12.922 07:32:16 -- common/autotest_common.sh@10 -- # set +x 00:30:12.922 ************************************ 00:30:12.922 START TEST spdkcli_nvmf_tcp 00:30:12.922 ************************************ 00:30:12.922 07:32:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:13.181 * Looking for test storage... 00:30:13.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:13.181 07:32:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:13.181 07:32:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:30:13.181 07:32:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:13.181 07:32:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:13.181 07:32:16 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:13.181 07:32:16 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:13.181 07:32:16 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:13.181 07:32:16 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:30:13.181 07:32:16 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:30:13.181 07:32:16 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:30:13.181 07:32:16 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:30:13.181 07:32:16 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:30:13.181 07:32:16 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:30:13.181 07:32:16 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:30:13.181 07:32:16 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:13.181 07:32:16 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:30:13.181 07:32:16 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:30:13.181 07:32:16 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:13.181 07:32:16 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:13.181 07:32:16 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:30:13.181 07:32:16 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:30:13.181 07:32:16 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:13.181 07:32:16 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:30:13.181 07:32:16 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:30:13.181 07:32:16 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:30:13.181 07:32:16 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:30:13.181 07:32:16 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:13.181 07:32:16 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:30:13.181 07:32:16 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:30:13.181 07:32:16 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:13.181 07:32:16 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:13.181 07:32:16 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:30:13.181 07:32:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:13.181 07:32:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:13.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.181 --rc genhtml_branch_coverage=1 00:30:13.181 --rc genhtml_function_coverage=1 00:30:13.181 --rc genhtml_legend=1 00:30:13.181 --rc geninfo_all_blocks=1 00:30:13.181 --rc geninfo_unexecuted_blocks=1 00:30:13.181 00:30:13.181 ' 00:30:13.181 07:32:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:13.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.181 --rc genhtml_branch_coverage=1 00:30:13.181 --rc genhtml_function_coverage=1 00:30:13.181 --rc genhtml_legend=1 00:30:13.181 --rc geninfo_all_blocks=1 00:30:13.181 --rc geninfo_unexecuted_blocks=1 00:30:13.181 00:30:13.181 ' 00:30:13.181 07:32:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:13.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.181 --rc genhtml_branch_coverage=1 00:30:13.181 --rc genhtml_function_coverage=1 00:30:13.181 --rc genhtml_legend=1 00:30:13.181 --rc geninfo_all_blocks=1 00:30:13.181 --rc geninfo_unexecuted_blocks=1 00:30:13.181 00:30:13.181 ' 00:30:13.181 07:32:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:13.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.181 --rc genhtml_branch_coverage=1 00:30:13.181 --rc genhtml_function_coverage=1 00:30:13.181 --rc genhtml_legend=1 00:30:13.181 --rc geninfo_all_blocks=1 00:30:13.181 --rc geninfo_unexecuted_blocks=1 00:30:13.181 00:30:13.181 ' 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:30:13.182 
07:32:16 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:30:13.182 07:32:16 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:13.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2667459 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2667459 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # '[' -z 2667459 ']' 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:13.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:13.182 07:32:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:13.182 [2024-11-20 07:32:16.521458] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:30:13.182 [2024-11-20 07:32:16.521548] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2667459 ] 00:30:13.182 [2024-11-20 07:32:16.586203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:13.440 [2024-11-20 07:32:16.645617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:13.440 [2024-11-20 07:32:16.645621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:13.440 07:32:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:13.440 07:32:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@866 -- # return 0 00:30:13.440 07:32:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:13.440 07:32:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:13.440 07:32:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:13.440 07:32:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:13.440 07:32:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:13.440 07:32:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:13.440 07:32:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:13.440 07:32:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:13.440 07:32:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:13.440 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:13.440 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:13.440 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:13.440 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:13.440 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:13.440 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:13.440 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:13.440 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:13.440 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:13.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:13.441 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:13.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:13.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:13.441 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:13.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:13.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
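Note on the spdkcli run: the long quoted block handed to spdkcli_job.py above is a list of (command, expected output, expect-success) triples; the job script feeds each command to the CLI against the nvmf_tgt started a few lines earlier, and the later check_match step compares a snapshot of 'll /nvmf' against a stored .test.match file. The lines below are a reduced sketch of the same idea, assuming the stock scripts/spdkcli.py entry point and a target on the default RPC socket; the object names are illustrative, and the real harness uses the test/app/match helper rather than a plain diff.

# Sketch only: drives spdkcli non-interactively, one command per invocation.
SPDKCLI=./scripts/spdkcli.py        # path is an assumption about the SPDK tree layout
$SPDKCLI "/bdevs/malloc create 32 512 Malloc1"
$SPDKCLI "nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192"
$SPDKCLI "/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True"
$SPDKCLI "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc1 1"
$SPDKCLI "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4"
# Verification step: snapshot the tree and compare against the expected-output file.
$SPDKCLI ll /nvmf > /tmp/spdkcli_nvmf.out
diff -u spdkcli_nvmf.test.match /tmp/spdkcli_nvmf.out    # illustrative; the test uses the match tool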
'\''127.0.0.1:4260'\'' True 00:30:13.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:13.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:13.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:13.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:13.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:13.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:13.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:13.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:13.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:13.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:13.441 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:13.441 ' 00:30:15.968 [2024-11-20 07:32:19.393294] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:17.340 [2024-11-20 07:32:20.665667] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:19.867 [2024-11-20 07:32:23.008873] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:21.767 [2024-11-20 07:32:25.023009] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:23.141 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:23.141 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:23.141 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:23.141 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:23.141 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:23.141 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:23.141 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:23.141 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:23.141 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:23.141 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:23.141 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:23.141 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:23.141 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:23.141 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:23.141 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:23.141 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:23.142 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:23.142 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:23.142 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:23.142 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:23.142 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:23.142 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:23.142 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:23.142 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:23.142 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:23.142 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:23.142 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:23.142 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:23.398 07:32:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:23.398 07:32:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:23.398 07:32:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:23.398 07:32:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:23.398 07:32:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:23.398 07:32:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:23.398 07:32:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:30:23.398 07:32:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:23.961 07:32:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:23.961 07:32:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:23.961 07:32:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:23.961 07:32:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:23.961 07:32:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:23.961 
07:32:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:23.961 07:32:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:23.961 07:32:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:23.961 07:32:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:23.961 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:23.961 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:23.961 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:23.961 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:23.961 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:23.961 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:23.961 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:23.961 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:23.961 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:23.961 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:23.961 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:23.961 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:23.961 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:23.961 ' 00:30:29.217 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:29.217 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:29.217 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:29.217 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:29.217 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:29.217 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:29.217 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:29.217 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:29.217 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:29.217 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:29.217 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:30:29.217 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:29.217 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:29.217 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:29.217 07:32:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:29.217 07:32:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:29.217 07:32:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:29.217 
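The clear phase above undoes the configuration in reverse order: namespaces and hosts come off the subsystems first, then listen addresses, then the subsystems themselves, and the malloc bdevs last. A minimal teardown sketch in the same one-shot spdkcli form (the invocation style is an assumption; the commands are the ones the job just executed):

# Teardown sketch mirroring spdkcli_clear_nvmf_config
./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1
./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2
./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all
./scripts/spdkcli.py /nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3
./scripts/spdkcli.py /nvmf/subsystem delete_all          # removes whatever subsystems remain
./scripts/spdkcli.py /bdevs/malloc delete Malloc3        # repeat for each remaining malloc bdev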
07:32:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2667459 00:30:29.217 07:32:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 2667459 ']' 00:30:29.217 07:32:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 2667459 00:30:29.217 07:32:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # uname 00:30:29.217 07:32:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:29.217 07:32:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2667459 00:30:29.217 07:32:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:29.217 07:32:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:29.217 07:32:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2667459' 00:30:29.217 killing process with pid 2667459 00:30:29.217 07:32:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # kill 2667459 00:30:29.217 07:32:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # wait 2667459 00:30:29.475 07:32:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:29.475 07:32:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:29.475 07:32:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2667459 ']' 00:30:29.475 07:32:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2667459 00:30:29.475 07:32:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 2667459 ']' 00:30:29.475 07:32:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 2667459 00:30:29.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2667459) - No such process 00:30:29.475 07:32:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@979 -- # echo 'Process with pid 2667459 is not found' 00:30:29.475 Process with pid 2667459 is not found 00:30:29.475 07:32:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:29.475 07:32:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:29.475 07:32:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:29.475 00:30:29.475 real 0m16.503s 00:30:29.475 user 0m35.155s 00:30:29.475 sys 0m0.728s 00:30:29.475 07:32:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:29.475 07:32:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:29.475 ************************************ 00:30:29.475 END TEST spdkcli_nvmf_tcp 00:30:29.475 ************************************ 00:30:29.475 07:32:32 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:29.475 07:32:32 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:29.475 07:32:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:29.475 07:32:32 -- common/autotest_common.sh@10 -- # set +x 00:30:29.475 ************************************ 00:30:29.475 START TEST nvmf_identify_passthru 00:30:29.475 ************************************ 00:30:29.475 07:32:32 nvmf_identify_passthru -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:29.733 * Looking for test 
storage... 00:30:29.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:29.733 07:32:32 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:29.733 07:32:32 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:30:29.733 07:32:32 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:29.733 07:32:33 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:29.733 07:32:33 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:29.733 07:32:33 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:29.733 07:32:33 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:29.733 07:32:33 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:30:29.733 07:32:33 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:30:29.733 07:32:33 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:30:29.733 07:32:33 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:30:29.733 07:32:33 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:30:29.733 07:32:33 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:30:29.733 07:32:33 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:30:29.733 07:32:33 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:29.733 07:32:33 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:30:29.733 07:32:33 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:30:29.733 07:32:33 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:29.733 07:32:33 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:29.733 07:32:33 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:30:29.733 07:32:33 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:30:29.733 07:32:33 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:29.733 07:32:33 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:30:29.733 07:32:33 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:30:29.733 07:32:33 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:30:29.733 07:32:33 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:30:29.733 07:32:33 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:29.733 07:32:33 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:30:29.733 07:32:33 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:30:29.733 07:32:33 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:29.733 07:32:33 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:29.733 07:32:33 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:30:29.733 07:32:33 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:29.733 07:32:33 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:29.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.733 --rc genhtml_branch_coverage=1 00:30:29.733 --rc genhtml_function_coverage=1 00:30:29.733 --rc genhtml_legend=1 00:30:29.733 --rc geninfo_all_blocks=1 00:30:29.733 --rc geninfo_unexecuted_blocks=1 00:30:29.733 00:30:29.733 ' 00:30:29.733 07:32:33 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:29.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.734 --rc genhtml_branch_coverage=1 00:30:29.734 --rc genhtml_function_coverage=1 00:30:29.734 --rc genhtml_legend=1 00:30:29.734 --rc geninfo_all_blocks=1 00:30:29.734 --rc geninfo_unexecuted_blocks=1 00:30:29.734 00:30:29.734 ' 00:30:29.734 07:32:33 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:29.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.734 --rc genhtml_branch_coverage=1 00:30:29.734 --rc genhtml_function_coverage=1 00:30:29.734 --rc genhtml_legend=1 00:30:29.734 --rc geninfo_all_blocks=1 00:30:29.734 --rc geninfo_unexecuted_blocks=1 00:30:29.734 00:30:29.734 ' 00:30:29.734 07:32:33 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:29.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.734 --rc genhtml_branch_coverage=1 00:30:29.734 --rc genhtml_function_coverage=1 00:30:29.734 --rc genhtml_legend=1 00:30:29.734 --rc geninfo_all_blocks=1 00:30:29.734 --rc geninfo_unexecuted_blocks=1 00:30:29.734 00:30:29.734 ' 00:30:29.734 07:32:33 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:29.734 07:32:33 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:30:29.734 07:32:33 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:29.734 07:32:33 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:29.734 07:32:33 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:29.734 07:32:33 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:30:29.734 07:32:33 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:29.734 07:32:33 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:29.734 07:32:33 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:29.734 07:32:33 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:29.734 07:32:33 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:29.734 07:32:33 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:29.734 07:32:33 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:29.734 07:32:33 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:29.734 07:32:33 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:29.734 07:32:33 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:29.734 07:32:33 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:29.734 07:32:33 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:29.734 07:32:33 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:29.734 07:32:33 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:30:29.734 07:32:33 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:29.734 07:32:33 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:29.734 07:32:33 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:29.734 07:32:33 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.734 07:32:33 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.734 07:32:33 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.734 07:32:33 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:29.734 07:32:33 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.734 07:32:33 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:30:29.734 07:32:33 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:29.734 07:32:33 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:29.734 07:32:33 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:29.734 07:32:33 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:29.734 07:32:33 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:29.734 07:32:33 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:29.734 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:29.734 07:32:33 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:29.734 07:32:33 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:29.734 07:32:33 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:29.734 07:32:33 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:29.734 07:32:33 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:30:29.734 07:32:33 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:29.734 07:32:33 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:29.734 07:32:33 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:29.734 07:32:33 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.734 07:32:33 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.734 07:32:33 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.734 07:32:33 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:29.734 07:32:33 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.734 07:32:33 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:30:29.734 07:32:33 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:29.734 07:32:33 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:29.734 07:32:33 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:29.734 07:32:33 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:29.734 07:32:33 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:29.734 07:32:33 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:29.734 07:32:33 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:29.734 07:32:33 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:29.734 07:32:33 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:29.734 07:32:33 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:29.734 07:32:33 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:30:29.734 07:32:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:31.634 07:32:34 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:31.634 07:32:34 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:30:31.634 07:32:34 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:31.634 07:32:34 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:31.634 07:32:34 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:31.634 07:32:34 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:31.634 07:32:34 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:31.634 07:32:34 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:30:31.634 07:32:34 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:31.634 07:32:34 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:30:31.634 07:32:34 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:30:31.634 07:32:34 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:30:31.634 07:32:34 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:30:31.634 07:32:34 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:30:31.634 07:32:34 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:30:31.634 07:32:34 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:31.634 07:32:34 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:31.634 07:32:34 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:31.634 07:32:34 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:31.634 07:32:34 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:31.634 07:32:34 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:31.634 07:32:34 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:31.634 07:32:34 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:31.634 07:32:34 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:31.634 07:32:34 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:31.634 07:32:34 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:31.634 07:32:34 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:31.634 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:31.634 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:31.634 Found net devices under 0000:09:00.0: cvl_0_0 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:31.634 Found net devices under 0000:09:00.1: cvl_0_1 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:31.634 07:32:35 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:31.634 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:31.892 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:31.892 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:31.892 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:31.892 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:31.892 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:31.892 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:31.892 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:31.892 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:31.892 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:31.892 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms 00:30:31.892 00:30:31.892 --- 10.0.0.2 ping statistics --- 00:30:31.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.892 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:30:31.892 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:31.892 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:31.892 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:30:31.892 00:30:31.892 --- 10.0.0.1 ping statistics --- 00:30:31.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.892 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:30:31.892 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:31.892 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:30:31.893 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:31.893 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:31.893 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:31.893 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:31.893 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:31.893 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:31.893 07:32:35 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:31.893 07:32:35 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:31.893 07:32:35 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:31.893 07:32:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:31.893 07:32:35 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:31.893 07:32:35 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:30:31.893 07:32:35 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:30:31.893 07:32:35 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:30:31.893 07:32:35 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:30:31.893 07:32:35 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:30:31.893 07:32:35 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:30:31.893 07:32:35 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:31.893 07:32:35 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:31.893 07:32:35 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:30:31.893 07:32:35 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:30:31.893 07:32:35 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:0b:00.0 00:30:31.893 07:32:35 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:0b:00.0 00:30:31.893 07:32:35 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:0b:00.0 00:30:31.893 07:32:35 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:0b:00.0 ']' 00:30:31.893 07:32:35 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:30:31.893 07:32:35 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:31.893 07:32:35 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:36.078 07:32:39 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=BTLJ72430F4Q1P0FGN 00:30:36.078 07:32:39 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:30:36.078 07:32:39 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:36.078 07:32:39 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:40.261 07:32:43 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:30:40.261 07:32:43 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:40.261 07:32:43 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:40.261 07:32:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:40.261 07:32:43 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:40.261 07:32:43 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:40.261 07:32:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:40.261 07:32:43 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2671975 00:30:40.262 07:32:43 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:40.262 07:32:43 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:40.262 07:32:43 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2671975 00:30:40.262 07:32:43 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # '[' -z 2671975 ']' 00:30:40.262 07:32:43 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:40.262 07:32:43 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:40.262 07:32:43 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:40.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:40.262 07:32:43 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:40.262 07:32:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:40.262 [2024-11-20 07:32:43.558170] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:30:40.262 [2024-11-20 07:32:43.558252] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:40.262 [2024-11-20 07:32:43.635337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:40.520 [2024-11-20 07:32:43.696993] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:40.520 [2024-11-20 07:32:43.697039] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:40.520 [2024-11-20 07:32:43.697072] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:40.520 [2024-11-20 07:32:43.697084] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:40.520 [2024-11-20 07:32:43.697094] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:40.520 [2024-11-20 07:32:43.698582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:40.520 [2024-11-20 07:32:43.698641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:40.520 [2024-11-20 07:32:43.698706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:40.520 [2024-11-20 07:32:43.698709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:40.520 07:32:43 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:40.520 07:32:43 nvmf_identify_passthru -- common/autotest_common.sh@866 -- # return 0 00:30:40.520 07:32:43 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:40.520 07:32:43 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.520 07:32:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:40.520 INFO: Log level set to 20 00:30:40.520 INFO: Requests: 00:30:40.520 { 00:30:40.520 "jsonrpc": "2.0", 00:30:40.520 "method": "nvmf_set_config", 00:30:40.520 "id": 1, 00:30:40.520 "params": { 00:30:40.520 "admin_cmd_passthru": { 00:30:40.520 "identify_ctrlr": true 00:30:40.520 } 00:30:40.520 } 00:30:40.520 } 00:30:40.520 00:30:40.520 INFO: response: 00:30:40.520 { 00:30:40.520 "jsonrpc": "2.0", 00:30:40.520 "id": 1, 00:30:40.520 "result": true 00:30:40.520 } 00:30:40.520 00:30:40.520 07:32:43 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.520 07:32:43 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:40.520 07:32:43 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.520 07:32:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:40.520 INFO: Setting log level to 20 00:30:40.520 INFO: Setting log level to 20 00:30:40.520 INFO: Log level set to 20 00:30:40.520 INFO: Log level set to 20 00:30:40.520 INFO: Requests: 00:30:40.520 { 00:30:40.520 "jsonrpc": "2.0", 00:30:40.520 "method": "framework_start_init", 00:30:40.520 "id": 1 00:30:40.520 } 00:30:40.520 00:30:40.520 INFO: Requests: 00:30:40.520 { 00:30:40.520 "jsonrpc": "2.0", 00:30:40.520 "method": "framework_start_init", 00:30:40.520 "id": 1 00:30:40.520 } 00:30:40.520 00:30:40.520 [2024-11-20 07:32:43.909662] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:40.520 INFO: response: 00:30:40.520 { 00:30:40.520 "jsonrpc": "2.0", 00:30:40.520 "id": 1, 00:30:40.520 "result": true 00:30:40.520 } 00:30:40.520 00:30:40.520 INFO: response: 00:30:40.520 { 00:30:40.520 "jsonrpc": "2.0", 00:30:40.520 "id": 1, 00:30:40.520 "result": true 00:30:40.520 } 00:30:40.520 00:30:40.520 07:32:43 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.520 07:32:43 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:40.520 07:32:43 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.521 07:32:43 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:30:40.521 INFO: Setting log level to 40 00:30:40.521 INFO: Setting log level to 40 00:30:40.521 INFO: Setting log level to 40 00:30:40.521 [2024-11-20 07:32:43.919784] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:40.521 07:32:43 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.521 07:32:43 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:40.521 07:32:43 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:40.521 07:32:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:40.521 07:32:43 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0 00:30:40.521 07:32:43 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.521 07:32:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:43.863 Nvme0n1 00:30:43.863 07:32:46 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.863 07:32:46 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:43.863 07:32:46 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.863 07:32:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:43.863 07:32:46 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.863 07:32:46 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:43.863 07:32:46 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.863 07:32:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:43.863 07:32:46 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.863 07:32:46 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:43.863 07:32:46 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.863 07:32:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:43.863 [2024-11-20 07:32:46.817616] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:43.863 07:32:46 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.863 07:32:46 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:43.863 07:32:46 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.863 07:32:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:43.863 [ 00:30:43.863 { 00:30:43.863 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:43.863 "subtype": "Discovery", 00:30:43.863 "listen_addresses": [], 00:30:43.863 "allow_any_host": true, 00:30:43.863 "hosts": [] 00:30:43.863 }, 00:30:43.863 { 00:30:43.863 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:43.863 "subtype": "NVMe", 00:30:43.863 "listen_addresses": [ 00:30:43.863 { 00:30:43.863 "trtype": "TCP", 00:30:43.863 "adrfam": "IPv4", 00:30:43.863 "traddr": "10.0.0.2", 00:30:43.863 "trsvcid": "4420" 00:30:43.863 } 00:30:43.863 ], 00:30:43.863 "allow_any_host": true, 00:30:43.863 "hosts": [], 00:30:43.863 "serial_number": 
"SPDK00000000000001", 00:30:43.863 "model_number": "SPDK bdev Controller", 00:30:43.863 "max_namespaces": 1, 00:30:43.863 "min_cntlid": 1, 00:30:43.863 "max_cntlid": 65519, 00:30:43.863 "namespaces": [ 00:30:43.863 { 00:30:43.863 "nsid": 1, 00:30:43.863 "bdev_name": "Nvme0n1", 00:30:43.863 "name": "Nvme0n1", 00:30:43.863 "nguid": "F9F994C716A743B487B472BD62292309", 00:30:43.863 "uuid": "f9f994c7-16a7-43b4-87b4-72bd62292309" 00:30:43.863 } 00:30:43.863 ] 00:30:43.863 } 00:30:43.863 ] 00:30:43.863 07:32:46 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.863 07:32:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:43.863 07:32:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:43.863 07:32:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:43.863 07:32:47 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F4Q1P0FGN 00:30:43.863 07:32:47 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:43.863 07:32:47 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:43.863 07:32:47 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:44.121 07:32:47 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:30:44.121 07:32:47 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F4Q1P0FGN '!=' BTLJ72430F4Q1P0FGN ']' 00:30:44.121 07:32:47 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:30:44.121 07:32:47 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:44.122 07:32:47 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.122 07:32:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:44.122 07:32:47 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.122 07:32:47 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:44.122 07:32:47 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:44.122 07:32:47 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:44.122 07:32:47 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:30:44.122 07:32:47 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:44.122 07:32:47 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:30:44.122 07:32:47 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:44.122 07:32:47 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:44.122 rmmod nvme_tcp 00:30:44.122 rmmod nvme_fabrics 00:30:44.122 rmmod nvme_keyring 00:30:44.122 07:32:47 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:44.122 07:32:47 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:30:44.122 07:32:47 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:30:44.122 07:32:47 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 2671975 ']' 00:30:44.122 07:32:47 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 2671975 00:30:44.122 07:32:47 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' -z 2671975 ']' 00:30:44.122 07:32:47 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # kill -0 2671975 00:30:44.122 07:32:47 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # uname 00:30:44.122 07:32:47 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:44.122 07:32:47 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2671975 00:30:44.122 07:32:47 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:44.122 07:32:47 nvmf_identify_passthru -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:44.122 07:32:47 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2671975' 00:30:44.122 killing process with pid 2671975 00:30:44.122 07:32:47 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # kill 2671975 00:30:44.122 07:32:47 nvmf_identify_passthru -- common/autotest_common.sh@976 -- # wait 2671975 00:30:46.037 07:32:48 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:46.037 07:32:48 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:46.037 07:32:48 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:46.037 07:32:48 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:30:46.037 07:32:48 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:30:46.037 07:32:48 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:46.037 07:32:48 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:30:46.037 07:32:48 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:46.037 07:32:48 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:46.037 07:32:48 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:46.037 07:32:48 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:46.037 07:32:48 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:47.935 07:32:51 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:47.935 00:30:47.935 real 0m18.153s 00:30:47.935 user 0m26.677s 00:30:47.935 sys 0m3.176s 00:30:47.935 07:32:51 nvmf_identify_passthru -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:47.935 07:32:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:47.935 ************************************ 00:30:47.935 END TEST nvmf_identify_passthru 00:30:47.935 ************************************ 00:30:47.935 07:32:51 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:47.935 07:32:51 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:47.935 07:32:51 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:47.935 07:32:51 -- common/autotest_common.sh@10 -- # set +x 00:30:47.935 ************************************ 00:30:47.935 START TEST nvmf_dif 00:30:47.935 ************************************ 00:30:47.935 07:32:51 nvmf_dif -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:47.935 * Looking for test 
storage... 00:30:47.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:47.935 07:32:51 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:47.935 07:32:51 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:30:47.935 07:32:51 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:47.935 07:32:51 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:47.935 07:32:51 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:47.935 07:32:51 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:47.935 07:32:51 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:47.935 07:32:51 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:30:47.935 07:32:51 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:30:47.935 07:32:51 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:30:47.935 07:32:51 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:30:47.935 07:32:51 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:30:47.935 07:32:51 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:30:47.935 07:32:51 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:30:47.935 07:32:51 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:47.935 07:32:51 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:30:47.935 07:32:51 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:30:47.935 07:32:51 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:47.935 07:32:51 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:47.935 07:32:51 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:30:47.935 07:32:51 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:30:47.935 07:32:51 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:47.935 07:32:51 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:30:47.935 07:32:51 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:30:47.935 07:32:51 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:30:47.935 07:32:51 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:30:47.935 07:32:51 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:47.935 07:32:51 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:30:47.935 07:32:51 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:30:47.935 07:32:51 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:47.935 07:32:51 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:47.935 07:32:51 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:30:47.935 07:32:51 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:47.935 07:32:51 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:47.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:47.935 --rc genhtml_branch_coverage=1 00:30:47.935 --rc genhtml_function_coverage=1 00:30:47.935 --rc genhtml_legend=1 00:30:47.935 --rc geninfo_all_blocks=1 00:30:47.935 --rc geninfo_unexecuted_blocks=1 00:30:47.935 00:30:47.935 ' 00:30:47.935 07:32:51 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:47.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:47.935 --rc genhtml_branch_coverage=1 00:30:47.935 --rc genhtml_function_coverage=1 00:30:47.935 --rc genhtml_legend=1 00:30:47.935 --rc geninfo_all_blocks=1 00:30:47.935 --rc geninfo_unexecuted_blocks=1 00:30:47.935 00:30:47.935 ' 00:30:47.935 07:32:51 nvmf_dif -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:47.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:47.935 --rc genhtml_branch_coverage=1 00:30:47.935 --rc genhtml_function_coverage=1 00:30:47.935 --rc genhtml_legend=1 00:30:47.935 --rc geninfo_all_blocks=1 00:30:47.935 --rc geninfo_unexecuted_blocks=1 00:30:47.935 00:30:47.935 ' 00:30:47.935 07:32:51 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:47.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:47.935 --rc genhtml_branch_coverage=1 00:30:47.935 --rc genhtml_function_coverage=1 00:30:47.935 --rc genhtml_legend=1 00:30:47.935 --rc geninfo_all_blocks=1 00:30:47.935 --rc geninfo_unexecuted_blocks=1 00:30:47.935 00:30:47.935 ' 00:30:47.935 07:32:51 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:47.935 07:32:51 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:30:47.935 07:32:51 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:47.935 07:32:51 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:47.935 07:32:51 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:47.935 07:32:51 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:47.935 07:32:51 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:47.935 07:32:51 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:47.935 07:32:51 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:47.935 07:32:51 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:47.935 07:32:51 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:47.935 07:32:51 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:47.935 07:32:51 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:47.935 07:32:51 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:47.935 07:32:51 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:47.935 07:32:51 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:47.935 07:32:51 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:47.935 07:32:51 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:47.935 07:32:51 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:47.935 07:32:51 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:30:47.935 07:32:51 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:47.935 07:32:51 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:47.935 07:32:51 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:47.935 07:32:51 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.935 07:32:51 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.935 07:32:51 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.935 07:32:51 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:30:47.935 07:32:51 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.935 07:32:51 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:30:47.936 07:32:51 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:47.936 07:32:51 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:47.936 07:32:51 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:47.936 07:32:51 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:47.936 07:32:51 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:47.936 07:32:51 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:47.936 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:47.936 07:32:51 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:47.936 07:32:51 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:47.936 07:32:51 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:47.936 07:32:51 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:30:47.936 07:32:51 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:47.936 07:32:51 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:47.936 07:32:51 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:30:47.936 07:32:51 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:30:47.936 07:32:51 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:47.936 07:32:51 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:47.936 07:32:51 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:47.936 07:32:51 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:47.936 07:32:51 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:47.936 07:32:51 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:47.936 07:32:51 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:47.936 07:32:51 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:47.936 07:32:51 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:47.936 07:32:51 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:47.936 07:32:51 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:30:47.936 07:32:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:50.467 07:32:53 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:50.467 07:32:53 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:30:50.467 07:32:53 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:50.467 07:32:53 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:50.467 07:32:53 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:50.467 07:32:53 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:50.467 07:32:53 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:50.467 07:32:53 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:30:50.467 07:32:53 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:50.467 07:32:53 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:50.468 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:50.468 
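The trace above is common.sh's gather_supported_nvmf_pci_devs: it whitelists NIC PCI IDs (Intel E810 0x1592/0x159b, X722 0x37d2, several Mellanox ConnectX IDs), then maps each matching PCI address to its kernel net device through sysfs. A minimal standalone sketch of the same discovery follows; it assumes lspci and sysfs are available and only covers the two E810 IDs seen in this run, so it is illustrative rather than a drop-in replacement for the helper:

    #!/usr/bin/env bash
    # List net devices backed by Intel E810 NICs (vendor 0x8086, device 0x1592/0x159b).
    net_devs=()
    for id in 8086:1592 8086:159b; do
        # -D prints the full domain:bus:dev.func PCI address, -d filters by vendor:device
        for pci in $(lspci -D -d "$id" | awk '{print $1}'); do
            for dev in /sys/bus/pci/devices/$pci/net/*; do
                [[ -e $dev ]] && net_devs+=("${dev##*/}")   # keep only the interface name
            done
        done
    done
    printf 'Found net device: %s\n' "${net_devs[@]}"

In this log the two E810 ports 0000:09:00.0 and 0000:09:00.1 resolve to cvl_0_0 and cvl_0_1, as the "Found net devices under ..." lines show.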
07:32:53 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:50.468 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:50.468 Found net devices under 0000:09:00.0: cvl_0_0 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:50.468 Found net devices under 0000:09:00.1: cvl_0_1 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:50.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:50.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:30:50.468 00:30:50.468 --- 10.0.0.2 ping statistics --- 00:30:50.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.468 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:50.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
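Condensed, the nvmf_tcp_init sequence traced above isolates the target-side port in its own network namespace so the initiator (root namespace) and the target exchange NVMe/TCP traffic over the physical link between the two E810 ports. A rough equivalent, using the interface names and addresses from this run (the address flushes, the ipts wrapper's iptables comment, and common.sh's error handling are omitted), is:

    # target port goes into its own namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # initiator side (root namespace) and target side (namespace) addressing
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # allow NVMe/TCP traffic to the default port on the initiator-facing interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # connectivity check in both directions (the ping output here in the log)
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Later steps then run the target as 'ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...' so it listens on 10.0.0.2 inside the namespace, which is exactly what nvmfappstart does below.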
00:30:50.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:30:50.468 00:30:50.468 --- 10.0.0.1 ping statistics --- 00:30:50.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.468 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:30:50.468 07:32:53 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:51.406 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:30:51.406 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:30:51.406 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:30:51.406 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:30:51.406 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:30:51.406 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:30:51.406 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:30:51.406 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:30:51.406 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:30:51.407 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:30:51.407 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:30:51.407 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:30:51.407 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:30:51.407 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:30:51.407 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:30:51.407 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:30:51.407 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:30:51.407 07:32:54 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:51.407 07:32:54 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:51.407 07:32:54 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:51.407 07:32:54 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:51.407 07:32:54 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:51.407 07:32:54 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:51.407 07:32:54 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:51.407 07:32:54 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:30:51.407 07:32:54 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:51.407 07:32:54 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:51.407 07:32:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:51.407 07:32:54 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=2675244 00:30:51.407 07:32:54 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:51.407 07:32:54 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 2675244 00:30:51.407 07:32:54 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 2675244 ']' 00:30:51.407 07:32:54 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:51.407 07:32:54 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:51.407 07:32:54 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:30:51.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:51.407 07:32:54 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:51.407 07:32:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:51.407 [2024-11-20 07:32:54.751250] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:30:51.407 [2024-11-20 07:32:54.751345] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:51.407 [2024-11-20 07:32:54.825201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:51.664 [2024-11-20 07:32:54.884509] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:51.664 [2024-11-20 07:32:54.884577] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:51.664 [2024-11-20 07:32:54.884606] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:51.664 [2024-11-20 07:32:54.884623] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:51.664 [2024-11-20 07:32:54.884633] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:51.664 [2024-11-20 07:32:54.885234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:51.664 07:32:54 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:51.664 07:32:54 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:30:51.664 07:32:54 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:51.664 07:32:54 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:51.664 07:32:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:51.664 07:32:55 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:51.664 07:32:55 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:30:51.664 07:32:55 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:51.664 07:32:55 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.664 07:32:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:51.664 [2024-11-20 07:32:55.029117] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:51.664 07:32:55 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.664 07:32:55 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:51.664 07:32:55 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:51.664 07:32:55 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:51.664 07:32:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:51.664 ************************************ 00:30:51.664 START TEST fio_dif_1_default 00:30:51.664 ************************************ 00:30:51.664 07:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:30:51.664 07:32:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:30:51.664 07:32:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:30:51.664 07:32:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:30:51.664 07:32:55 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:30:51.664 07:32:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:30:51.664 07:32:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:51.665 07:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.665 07:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:51.665 bdev_null0 00:30:51.665 07:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.665 07:32:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:51.665 07:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.665 07:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:51.665 07:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.665 07:32:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:51.665 07:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.665 07:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:51.665 07:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.665 07:32:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:51.665 07:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.665 07:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:51.665 [2024-11-20 07:32:55.093511] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:51.922 07:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.922 07:32:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:51.922 07:32:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:51.922 07:32:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:51.922 07:32:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:51.922 07:32:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:30:51.922 07:32:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:30:51.922 07:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:51.922 07:32:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:30:51.922 07:32:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:30:51.922 07:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:30:51.922 07:32:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:51.922 07:32:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:30:51.922 07:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:51.922 07:32:55 
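The create_subsystems helper traced above reduces to four RPCs against the nvmf_tgt that was just started in the namespace. Issued by hand with scripts/rpc.py they would look roughly like the following (rpc_cmd in the test supplies the namespace prefix and RPC socket automatically; the transport was already created with --dif-insert-or-strip, so the target handles the protection information and fio can submit plain data I/O):

    # 64 MiB null bdev, 512-byte blocks, 16 bytes of metadata per block, DIF type 1
    rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1

    # export it through an NVMe-oF subsystem on the TCP listener set up earlier
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420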
nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:51.922 { 00:30:51.922 "params": { 00:30:51.922 "name": "Nvme$subsystem", 00:30:51.922 "trtype": "$TEST_TRANSPORT", 00:30:51.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:51.922 "adrfam": "ipv4", 00:30:51.922 "trsvcid": "$NVMF_PORT", 00:30:51.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:51.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:51.922 "hdgst": ${hdgst:-false}, 00:30:51.922 "ddgst": ${ddgst:-false} 00:30:51.922 }, 00:30:51.922 "method": "bdev_nvme_attach_controller" 00:30:51.922 } 00:30:51.922 EOF 00:30:51.922 )") 00:30:51.922 07:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:30:51.922 07:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:51.922 07:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:30:51.922 07:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:30:51.922 07:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:30:51.922 07:32:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:30:51.922 07:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:51.922 07:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:30:51.922 07:32:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:30:51.922 07:32:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:30:51.922 07:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:30:51.922 07:32:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
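What fio_bdev/fio_plugin do with all of this is build an LD_PRELOAD list (sanitizer runtime first, if one is linked in, then the spdk_bdev fio plugin), hand fio the SPDK JSON configuration via --spdk_json_conf, and feed it the generated job file on /dev/fd/61. Run by hand outside the harness it is roughly:

    # spdk.json holds the bdev_nvme_attach_controller config printed just below in the log
    LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
        fio --ioengine=spdk_bdev --spdk_json_conf=./spdk.json ./dif.job

    # dif.job: indicative sketch only; gen_fio_conf writes the real file to /dev/fd/61
    # and it is not echoed in this log. Pattern and block size are taken from fio's
    # own banner ("rw=randread, bs=(R) 4096B ... iodepth=4").
    [global]
    thread=1          # the SPDK plugin requires threads rather than forked jobs
    direct=1
    time_based=1
    runtime=10
    [filename0]
    filename=Nvme0n1  # controller Nvme0, namespace 1, i.e. the exported bdev_null0
    rw=randread
    bs=4096
    iodepth=4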
00:30:51.922 07:32:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:30:51.922 07:32:55 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:51.922 "params": { 00:30:51.922 "name": "Nvme0", 00:30:51.922 "trtype": "tcp", 00:30:51.922 "traddr": "10.0.0.2", 00:30:51.922 "adrfam": "ipv4", 00:30:51.922 "trsvcid": "4420", 00:30:51.922 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:51.922 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:51.922 "hdgst": false, 00:30:51.922 "ddgst": false 00:30:51.922 }, 00:30:51.922 "method": "bdev_nvme_attach_controller" 00:30:51.922 }' 00:30:51.922 07:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:30:51.922 07:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:30:51.922 07:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:30:51.922 07:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:51.922 07:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:30:51.922 07:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:30:51.922 07:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:30:51.922 07:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:30:51.922 07:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:51.922 07:32:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:52.181 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:52.181 fio-3.35 00:30:52.181 Starting 1 thread 00:31:04.376 00:31:04.376 filename0: (groupid=0, jobs=1): err= 0: pid=2675473: Wed Nov 20 07:33:06 2024 00:31:04.376 read: IOPS=192, BW=772KiB/s (790kB/s)(7728KiB/10012msec) 00:31:04.376 slat (nsec): min=6758, max=67470, avg=8840.26, stdev=3252.08 00:31:04.376 clat (usec): min=537, max=42424, avg=20700.84, stdev=20374.76 00:31:04.376 lat (usec): min=545, max=42435, avg=20709.68, stdev=20374.52 00:31:04.376 clat percentiles (usec): 00:31:04.376 | 1.00th=[ 562], 5.00th=[ 578], 10.00th=[ 586], 20.00th=[ 594], 00:31:04.376 | 30.00th=[ 611], 40.00th=[ 644], 50.00th=[ 742], 60.00th=[41157], 00:31:04.376 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:31:04.376 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:04.376 | 99.99th=[42206] 00:31:04.376 bw ( KiB/s): min= 704, max= 832, per=99.89%, avg=771.20, stdev=32.67, samples=20 00:31:04.376 iops : min= 176, max= 208, avg=192.80, stdev= 8.17, samples=20 00:31:04.376 lat (usec) : 750=50.05%, 1000=0.47% 00:31:04.376 lat (msec) : 4=0.21%, 50=49.28% 00:31:04.376 cpu : usr=91.29%, sys=8.41%, ctx=21, majf=0, minf=282 00:31:04.376 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:04.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:04.376 issued rwts: total=1932,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:04.376 latency : target=0, window=0, percentile=100.00%, depth=4 
00:31:04.376 00:31:04.376 Run status group 0 (all jobs): 00:31:04.376 READ: bw=772KiB/s (790kB/s), 772KiB/s-772KiB/s (790kB/s-790kB/s), io=7728KiB (7913kB), run=10012-10012msec 00:31:04.376 07:33:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:04.376 07:33:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:31:04.376 07:33:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:31:04.376 07:33:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:04.376 07:33:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:31:04.376 07:33:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:04.376 07:33:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.376 07:33:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:04.376 07:33:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.376 07:33:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:04.376 07:33:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.376 07:33:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:04.376 07:33:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.376 00:31:04.376 real 0m11.324s 00:31:04.376 user 0m10.477s 00:31:04.376 sys 0m1.127s 00:31:04.376 07:33:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:04.376 07:33:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:04.376 ************************************ 00:31:04.376 END TEST fio_dif_1_default 00:31:04.376 ************************************ 00:31:04.376 07:33:06 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:04.376 07:33:06 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:31:04.376 07:33:06 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:04.376 07:33:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:04.376 ************************************ 00:31:04.376 START TEST fio_dif_1_multi_subsystems 00:31:04.376 ************************************ 00:31:04.376 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:31:04.376 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:31:04.376 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:04.376 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:31:04.376 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:04.376 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:31:04.376 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:31:04.376 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:04.376 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.376 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:04.376 bdev_null0 00:31:04.376 07:33:06 
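Each test case ends with destroy_subsystems, the mirror image of the setup: the subsystem is deleted first, then its backing null bdev. For one subsystem that is simply:

    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    rpc.py bdev_null_delete bdev_null0

fio_dif_1_multi_subsystems, which starts next, runs the same create/teardown sequence for two subsystems (cnode0 backed by bdev_null0 and cnode1 backed by bdev_null1) and points a single fio process at both namespaces at once, which is why its banner below shows separate filename0 and filename1 job entries.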
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.376 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:04.376 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.376 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:04.376 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.376 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:04.376 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:04.377 [2024-11-20 07:33:06.457495] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:04.377 bdev_null1 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:04.377 { 00:31:04.377 "params": { 00:31:04.377 "name": "Nvme$subsystem", 00:31:04.377 "trtype": "$TEST_TRANSPORT", 00:31:04.377 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:04.377 "adrfam": "ipv4", 00:31:04.377 "trsvcid": "$NVMF_PORT", 00:31:04.377 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:04.377 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:04.377 "hdgst": ${hdgst:-false}, 00:31:04.377 "ddgst": ${ddgst:-false} 00:31:04.377 }, 00:31:04.377 "method": "bdev_nvme_attach_controller" 00:31:04.377 } 00:31:04.377 EOF 00:31:04.377 )") 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:04.377 
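The repeated ldd/grep/awk triples in this part of the trace are fio_plugin probing whether the SPDK fio plugin was linked against a sanitizer runtime; if it was, that runtime has to come first in LD_PRELOAD or the ASan runtime complains that it was not loaded first. In isolation the idiom is roughly:

    plugin=/path/to/spdk/build/fio/spdk_bdev
    ld_preload=""
    for lib in libasan libclang_rt.asan; do
        # third field of the matching ldd line is the resolved library path
        asan_lib=$(ldd "$plugin" | grep "$lib" | awk '{print $3}')
        [[ -n $asan_lib ]] && ld_preload="$asan_lib $ld_preload"
    done
    # then: LD_PRELOAD="$ld_preload $plugin" fio --ioengine=spdk_bdev ... <job file>

In this run both greps come back empty (asan_lib=), so only the plugin itself lands in LD_PRELOAD; this build was made without ASan.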
07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:04.377 { 00:31:04.377 "params": { 00:31:04.377 "name": "Nvme$subsystem", 00:31:04.377 "trtype": "$TEST_TRANSPORT", 00:31:04.377 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:04.377 "adrfam": "ipv4", 00:31:04.377 "trsvcid": "$NVMF_PORT", 00:31:04.377 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:04.377 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:04.377 "hdgst": ${hdgst:-false}, 00:31:04.377 "ddgst": ${ddgst:-false} 00:31:04.377 }, 00:31:04.377 "method": "bdev_nvme_attach_controller" 00:31:04.377 } 00:31:04.377 EOF 00:31:04.377 )") 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:31:04.377 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:04.377 "params": { 00:31:04.377 "name": "Nvme0", 00:31:04.377 "trtype": "tcp", 00:31:04.377 "traddr": "10.0.0.2", 00:31:04.377 "adrfam": "ipv4", 00:31:04.377 "trsvcid": "4420", 00:31:04.377 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:04.377 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:04.377 "hdgst": false, 00:31:04.377 "ddgst": false 00:31:04.377 }, 00:31:04.377 "method": "bdev_nvme_attach_controller" 00:31:04.377 },{ 00:31:04.377 "params": { 00:31:04.377 "name": "Nvme1", 00:31:04.377 "trtype": "tcp", 00:31:04.377 "traddr": "10.0.0.2", 00:31:04.377 "adrfam": "ipv4", 00:31:04.377 "trsvcid": "4420", 00:31:04.377 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:04.377 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:04.378 "hdgst": false, 00:31:04.378 "ddgst": false 00:31:04.378 }, 00:31:04.378 "method": "bdev_nvme_attach_controller" 00:31:04.378 }' 00:31:04.378 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:31:04.378 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:31:04.378 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:31:04.378 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:04.378 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:31:04.378 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:31:04.378 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 
-- # asan_lib= 00:31:04.378 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:31:04.378 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:04.378 07:33:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:04.378 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:04.378 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:04.378 fio-3.35 00:31:04.378 Starting 2 threads 00:31:14.344 00:31:14.344 filename0: (groupid=0, jobs=1): err= 0: pid=2677079: Wed Nov 20 07:33:17 2024 00:31:14.344 read: IOPS=208, BW=834KiB/s (854kB/s)(8352KiB/10010msec) 00:31:14.344 slat (nsec): min=7072, max=39505, avg=9552.18, stdev=3632.05 00:31:14.344 clat (usec): min=531, max=46188, avg=19145.31, stdev=20284.85 00:31:14.344 lat (usec): min=539, max=46213, avg=19154.86, stdev=20284.59 00:31:14.344 clat percentiles (usec): 00:31:14.344 | 1.00th=[ 570], 5.00th=[ 586], 10.00th=[ 603], 20.00th=[ 619], 00:31:14.344 | 30.00th=[ 644], 40.00th=[ 676], 50.00th=[ 725], 60.00th=[41157], 00:31:14.344 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:31:14.344 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:31:14.344 | 99.99th=[46400] 00:31:14.344 bw ( KiB/s): min= 704, max= 1024, per=50.59%, avg=833.60, stdev=78.71, samples=20 00:31:14.344 iops : min= 176, max= 256, avg=208.40, stdev=19.68, samples=20 00:31:14.344 lat (usec) : 750=51.01%, 1000=3.40% 00:31:14.344 lat (msec) : 2=0.19%, 50=45.40% 00:31:14.344 cpu : usr=95.10%, sys=4.57%, ctx=22, majf=0, minf=168 00:31:14.344 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:14.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:14.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:14.344 issued rwts: total=2088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:14.344 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:14.344 filename1: (groupid=0, jobs=1): err= 0: pid=2677080: Wed Nov 20 07:33:17 2024 00:31:14.344 read: IOPS=203, BW=815KiB/s (834kB/s)(8176KiB/10038msec) 00:31:14.344 slat (nsec): min=7066, max=44312, avg=9520.18, stdev=3987.50 00:31:14.344 clat (usec): min=518, max=46157, avg=19612.90, stdev=20300.31 00:31:14.344 lat (usec): min=526, max=46182, avg=19622.42, stdev=20300.02 00:31:14.344 clat percentiles (usec): 00:31:14.344 | 1.00th=[ 545], 5.00th=[ 578], 10.00th=[ 594], 20.00th=[ 619], 00:31:14.344 | 30.00th=[ 652], 40.00th=[ 709], 50.00th=[ 832], 60.00th=[41157], 00:31:14.344 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:31:14.344 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[46400], 00:31:14.344 | 99.99th=[46400] 00:31:14.344 bw ( KiB/s): min= 768, max= 1152, per=49.56%, avg=816.00, stdev=99.31, samples=20 00:31:14.344 iops : min= 192, max= 288, avg=204.00, stdev=24.83, samples=20 00:31:14.344 lat (usec) : 750=45.69%, 1000=7.68% 00:31:14.344 lat (msec) : 2=0.05%, 50=46.58% 00:31:14.344 cpu : usr=95.07%, sys=4.61%, ctx=14, majf=0, minf=165 00:31:14.344 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:14.344 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:14.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:14.344 issued rwts: total=2044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:14.344 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:14.344 00:31:14.344 Run status group 0 (all jobs): 00:31:14.344 READ: bw=1647KiB/s (1686kB/s), 815KiB/s-834KiB/s (834kB/s-854kB/s), io=16.1MiB (16.9MB), run=10010-10038msec 00:31:14.605 07:33:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:14.605 07:33:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:31:14.605 07:33:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:14.605 07:33:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:14.605 07:33:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:31:14.605 07:33:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:14.605 07:33:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.605 07:33:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:14.605 07:33:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.605 07:33:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:14.605 07:33:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.605 07:33:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:14.605 07:33:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.605 07:33:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:14.605 07:33:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:14.605 07:33:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:31:14.605 07:33:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:14.605 07:33:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.605 07:33:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:14.605 07:33:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.605 07:33:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:14.605 07:33:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.605 07:33:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:14.605 07:33:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.605 00:31:14.605 real 0m11.413s 00:31:14.605 user 0m20.387s 00:31:14.605 sys 0m1.250s 00:31:14.605 07:33:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:14.605 07:33:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:14.605 ************************************ 00:31:14.605 END TEST fio_dif_1_multi_subsystems 00:31:14.605 ************************************ 00:31:14.605 07:33:17 nvmf_dif -- 
target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:14.605 07:33:17 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:31:14.605 07:33:17 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:14.605 07:33:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:14.605 ************************************ 00:31:14.605 START TEST fio_dif_rand_params 00:31:14.605 ************************************ 00:31:14.605 07:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:31:14.605 07:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:31:14.605 07:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:14.605 07:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:31:14.605 07:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:31:14.605 07:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:31:14.605 07:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:31:14.605 07:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:31:14.605 07:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:31:14.605 07:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:14.605 07:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:14.605 07:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:14.605 07:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:14.605 07:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:14.605 07:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.605 07:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:14.605 bdev_null0 00:31:14.605 07:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.605 07:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:14.605 07:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.605 07:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:14.605 07:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.605 07:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:14.605 07:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.605 07:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:14.605 07:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.605 07:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:14.605 07:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.605 07:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:14.605 [2024-11-20 07:33:17.923762] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 
port 4420 *** 00:31:14.605 07:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.605 07:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:14.605 07:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:14.605 07:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:14.605 07:33:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:31:14.605 07:33:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:31:14.605 07:33:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:14.605 07:33:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:14.605 { 00:31:14.605 "params": { 00:31:14.605 "name": "Nvme$subsystem", 00:31:14.605 "trtype": "$TEST_TRANSPORT", 00:31:14.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:14.605 "adrfam": "ipv4", 00:31:14.605 "trsvcid": "$NVMF_PORT", 00:31:14.605 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:14.605 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:14.605 "hdgst": ${hdgst:-false}, 00:31:14.605 "ddgst": ${ddgst:-false} 00:31:14.605 }, 00:31:14.605 "method": "bdev_nvme_attach_controller" 00:31:14.605 } 00:31:14.605 EOF 00:31:14.605 )") 00:31:14.605 07:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:14.605 07:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:14.606 07:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:31:14.606 07:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:14.606 07:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:14.606 07:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:31:14.606 07:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:14.606 07:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:14.606 07:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:31:14.606 07:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:31:14.606 07:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:14.606 07:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:31:14.606 07:33:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:14.606 07:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:14.606 07:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:31:14.606 07:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:31:14.606 07:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:14.606 07:33:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:14.606 07:33:17 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@584 -- # jq . 00:31:14.606 07:33:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:31:14.606 07:33:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:14.606 "params": { 00:31:14.606 "name": "Nvme0", 00:31:14.606 "trtype": "tcp", 00:31:14.606 "traddr": "10.0.0.2", 00:31:14.606 "adrfam": "ipv4", 00:31:14.606 "trsvcid": "4420", 00:31:14.606 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:14.606 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:14.606 "hdgst": false, 00:31:14.606 "ddgst": false 00:31:14.606 }, 00:31:14.606 "method": "bdev_nvme_attach_controller" 00:31:14.606 }' 00:31:14.606 07:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:31:14.606 07:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:31:14.606 07:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:31:14.606 07:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:14.606 07:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:31:14.606 07:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:31:14.606 07:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:31:14.606 07:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:31:14.606 07:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:14.606 07:33:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:14.864 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:14.864 ... 
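The JSON printed above is the bdev_nvme_attach_controller stanza that gen_nvmf_target_json hands to fio_bdev over /dev/fd/62, while gen_fio_conf supplies the job file over /dev/fd/61. A rough standalone sketch of this first pass (randread, bs=128k, iodepth=3, 3 jobs, 5 s), written out to ordinary files instead of fd redirections; the /tmp paths, the outer "subsystems"/"bdev" wrapper, and the Nvme0n1 filename follow the usual spdk_bdev fio-plugin conventions and are assumptions, not text from this trace:

  # Sketch only: hand-rolled equivalent of the traced fio invocation.
  cat > /tmp/nvme0.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF

  cat > /tmp/dif_rand.fio <<'EOF'
  [global]
  ioengine=spdk_bdev
  thread=1
  rw=randread
  bs=128k
  iodepth=3
  numjobs=3
  runtime=5
  time_based=1

  [filename0]
  # bdev exposed by the attach_controller entry in the JSON config above (assumed name)
  filename=Nvme0n1
  EOF

  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/nvme0.json /tmp/dif_rand.fio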
00:31:14.864 fio-3.35 00:31:14.864 Starting 3 threads 00:31:21.422 00:31:21.422 filename0: (groupid=0, jobs=1): err= 0: pid=2679016: Wed Nov 20 07:33:23 2024 00:31:21.422 read: IOPS=224, BW=28.1MiB/s (29.4MB/s)(141MiB/5005msec) 00:31:21.422 slat (usec): min=7, max=112, avg=13.75, stdev= 4.69 00:31:21.422 clat (usec): min=6951, max=55328, avg=13337.29, stdev=4352.94 00:31:21.422 lat (usec): min=6963, max=55340, avg=13351.04, stdev=4352.87 00:31:21.422 clat percentiles (usec): 00:31:21.422 | 1.00th=[ 8160], 5.00th=[ 9896], 10.00th=[10552], 20.00th=[11207], 00:31:21.422 | 30.00th=[11731], 40.00th=[12125], 50.00th=[12780], 60.00th=[13304], 00:31:21.422 | 70.00th=[14091], 80.00th=[15008], 90.00th=[16057], 95.00th=[16712], 00:31:21.422 | 99.00th=[46924], 99.50th=[50070], 99.90th=[53740], 99.95th=[55313], 00:31:21.422 | 99.99th=[55313] 00:31:21.422 bw ( KiB/s): min=24576, max=31488, per=33.62%, avg=28723.20, stdev=2550.88, samples=10 00:31:21.422 iops : min= 192, max= 246, avg=224.40, stdev=19.93, samples=10 00:31:21.422 lat (msec) : 10=5.96%, 20=92.97%, 50=0.80%, 100=0.27% 00:31:21.422 cpu : usr=92.23%, sys=7.23%, ctx=26, majf=0, minf=165 00:31:21.422 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:21.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.422 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.422 issued rwts: total=1124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:21.422 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:21.422 filename0: (groupid=0, jobs=1): err= 0: pid=2679017: Wed Nov 20 07:33:23 2024 00:31:21.422 read: IOPS=221, BW=27.7MiB/s (29.1MB/s)(139MiB/5005msec) 00:31:21.422 slat (nsec): min=7138, max=84544, avg=14079.81, stdev=5106.58 00:31:21.422 clat (usec): min=5412, max=52838, avg=13493.31, stdev=4686.49 00:31:21.422 lat (usec): min=5420, max=52881, avg=13507.39, stdev=4686.46 00:31:21.422 clat percentiles (usec): 00:31:21.422 | 1.00th=[ 8717], 5.00th=[10159], 10.00th=[10945], 20.00th=[11600], 00:31:21.422 | 30.00th=[11994], 40.00th=[12387], 50.00th=[12780], 60.00th=[13304], 00:31:21.422 | 70.00th=[13960], 80.00th=[14746], 90.00th=[15795], 95.00th=[16712], 00:31:21.422 | 99.00th=[49021], 99.50th=[52167], 99.90th=[52691], 99.95th=[52691], 00:31:21.422 | 99.99th=[52691] 00:31:21.422 bw ( KiB/s): min=20736, max=30976, per=33.23%, avg=28390.40, stdev=2969.43, samples=10 00:31:21.422 iops : min= 162, max= 242, avg=221.80, stdev=23.20, samples=10 00:31:21.422 lat (msec) : 10=4.32%, 20=94.33%, 50=0.81%, 100=0.54% 00:31:21.422 cpu : usr=91.69%, sys=7.75%, ctx=35, majf=0, minf=96 00:31:21.422 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:21.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.422 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.422 issued rwts: total=1111,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:21.422 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:21.422 filename0: (groupid=0, jobs=1): err= 0: pid=2679018: Wed Nov 20 07:33:23 2024 00:31:21.422 read: IOPS=220, BW=27.6MiB/s (29.0MB/s)(138MiB/5005msec) 00:31:21.422 slat (usec): min=7, max=113, avg=13.87, stdev= 4.64 00:31:21.422 clat (usec): min=4644, max=51304, avg=13558.04, stdev=3967.07 00:31:21.422 lat (usec): min=4657, max=51317, avg=13571.90, stdev=3967.21 00:31:21.422 clat percentiles (usec): 00:31:21.422 | 1.00th=[ 6194], 5.00th=[ 9634], 10.00th=[10945], 20.00th=[11731], 00:31:21.422 
| 30.00th=[12125], 40.00th=[12518], 50.00th=[13173], 60.00th=[13829], 00:31:21.422 | 70.00th=[14615], 80.00th=[15401], 90.00th=[16188], 95.00th=[16909], 00:31:21.422 | 99.00th=[17957], 99.50th=[51119], 99.90th=[51119], 99.95th=[51119], 00:31:21.422 | 99.99th=[51119] 00:31:21.422 bw ( KiB/s): min=25856, max=30464, per=33.05%, avg=28236.80, stdev=1448.41, samples=10 00:31:21.422 iops : min= 202, max= 238, avg=220.60, stdev=11.32, samples=10 00:31:21.422 lat (msec) : 10=6.06%, 20=93.13%, 50=0.27%, 100=0.54% 00:31:21.422 cpu : usr=91.65%, sys=7.79%, ctx=20, majf=0, minf=146 00:31:21.422 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:21.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.422 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.422 issued rwts: total=1106,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:21.422 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:21.422 00:31:21.422 Run status group 0 (all jobs): 00:31:21.422 READ: bw=83.4MiB/s (87.5MB/s), 27.6MiB/s-28.1MiB/s (29.0MB/s-29.4MB/s), io=418MiB (438MB), run=5005-5005msec 00:31:21.422 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:21.422 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:21.422 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:21.422 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:21.422 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:21.422 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:21.422 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.422 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:21.422 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.422 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:21.422 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.422 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:21.422 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.422 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:31:21.422 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:31:21.422 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:31:21.422 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:31:21.422 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:31:21.422 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:31:21.422 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:21.422 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:21.422 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:21.422 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:21.422 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:21.422 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create 
bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:21.422 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.422 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:21.422 bdev_null0 00:31:21.422 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.422 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:21.422 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.422 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:21.422 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.422 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:21.422 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.422 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:21.422 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:21.423 [2024-11-20 07:33:24.172131] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:21.423 bdev_null1 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:21.423 bdev_null2 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:21.423 { 00:31:21.423 "params": { 00:31:21.423 "name": "Nvme$subsystem", 00:31:21.423 
"trtype": "$TEST_TRANSPORT", 00:31:21.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:21.423 "adrfam": "ipv4", 00:31:21.423 "trsvcid": "$NVMF_PORT", 00:31:21.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:21.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:21.423 "hdgst": ${hdgst:-false}, 00:31:21.423 "ddgst": ${ddgst:-false} 00:31:21.423 }, 00:31:21.423 "method": "bdev_nvme_attach_controller" 00:31:21.423 } 00:31:21.423 EOF 00:31:21.423 )") 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:21.423 { 00:31:21.423 "params": { 00:31:21.423 "name": "Nvme$subsystem", 00:31:21.423 "trtype": "$TEST_TRANSPORT", 00:31:21.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:21.423 "adrfam": "ipv4", 00:31:21.423 "trsvcid": "$NVMF_PORT", 00:31:21.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:21.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:21.423 "hdgst": ${hdgst:-false}, 00:31:21.423 "ddgst": ${ddgst:-false} 00:31:21.423 }, 00:31:21.423 "method": "bdev_nvme_attach_controller" 00:31:21.423 } 00:31:21.423 EOF 00:31:21.423 )") 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= 
files )) 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:21.423 { 00:31:21.423 "params": { 00:31:21.423 "name": "Nvme$subsystem", 00:31:21.423 "trtype": "$TEST_TRANSPORT", 00:31:21.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:21.423 "adrfam": "ipv4", 00:31:21.423 "trsvcid": "$NVMF_PORT", 00:31:21.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:21.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:21.423 "hdgst": ${hdgst:-false}, 00:31:21.423 "ddgst": ${ddgst:-false} 00:31:21.423 }, 00:31:21.423 "method": "bdev_nvme_attach_controller" 00:31:21.423 } 00:31:21.423 EOF 00:31:21.423 )") 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:31:21.423 07:33:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:21.423 "params": { 00:31:21.423 "name": "Nvme0", 00:31:21.423 "trtype": "tcp", 00:31:21.423 "traddr": "10.0.0.2", 00:31:21.423 "adrfam": "ipv4", 00:31:21.423 "trsvcid": "4420", 00:31:21.423 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:21.423 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:21.423 "hdgst": false, 00:31:21.423 "ddgst": false 00:31:21.423 }, 00:31:21.423 "method": "bdev_nvme_attach_controller" 00:31:21.423 },{ 00:31:21.423 "params": { 00:31:21.423 "name": "Nvme1", 00:31:21.423 "trtype": "tcp", 00:31:21.423 "traddr": "10.0.0.2", 00:31:21.423 "adrfam": "ipv4", 00:31:21.423 "trsvcid": "4420", 00:31:21.423 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:21.423 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:21.423 "hdgst": false, 00:31:21.423 "ddgst": false 00:31:21.423 }, 00:31:21.424 "method": "bdev_nvme_attach_controller" 00:31:21.424 },{ 00:31:21.424 "params": { 00:31:21.424 "name": "Nvme2", 00:31:21.424 "trtype": "tcp", 00:31:21.424 "traddr": "10.0.0.2", 00:31:21.424 "adrfam": "ipv4", 00:31:21.424 "trsvcid": "4420", 00:31:21.424 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:21.424 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:21.424 "hdgst": false, 00:31:21.424 "ddgst": false 00:31:21.424 }, 00:31:21.424 "method": "bdev_nvme_attach_controller" 00:31:21.424 }' 00:31:21.424 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:31:21.424 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:31:21.424 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:31:21.424 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:21.424 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:31:21.424 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:31:21.424 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:31:21.424 07:33:24 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:31:21.424 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:21.424 07:33:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:21.424 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:21.424 ... 00:31:21.424 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:21.424 ... 00:31:21.424 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:21.424 ... 00:31:21.424 fio-3.35 00:31:21.424 Starting 24 threads 00:31:33.624 00:31:33.624 filename0: (groupid=0, jobs=1): err= 0: pid=2679876: Wed Nov 20 07:33:35 2024 00:31:33.624 read: IOPS=471, BW=1888KiB/s (1933kB/s)(18.4MiB/10001msec) 00:31:33.624 slat (usec): min=12, max=112, avg=40.60, stdev=18.64 00:31:33.624 clat (usec): min=20267, max=63914, avg=33546.17, stdev=2184.32 00:31:33.624 lat (usec): min=20288, max=63982, avg=33586.78, stdev=2183.91 00:31:33.624 clat percentiles (usec): 00:31:33.624 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:31:33.624 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:31:33.624 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:31:33.624 | 99.00th=[41157], 99.50th=[43779], 99.90th=[63701], 99.95th=[63701], 00:31:33.624 | 99.99th=[63701] 00:31:33.624 bw ( KiB/s): min= 1667, max= 1920, per=4.16%, avg=1886.47, stdev=71.42, samples=19 00:31:33.624 iops : min= 416, max= 480, avg=471.58, stdev=17.98, samples=19 00:31:33.624 lat (msec) : 50=99.66%, 100=0.34% 00:31:33.624 cpu : usr=98.42%, sys=1.17%, ctx=18, majf=0, minf=9 00:31:33.624 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:33.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.624 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.624 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.624 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.624 filename0: (groupid=0, jobs=1): err= 0: pid=2679877: Wed Nov 20 07:33:35 2024 00:31:33.624 read: IOPS=474, BW=1896KiB/s (1942kB/s)(18.6MiB/10024msec) 00:31:33.624 slat (nsec): min=5679, max=99304, avg=39737.24, stdev=12782.25 00:31:33.624 clat (usec): min=18154, max=44280, avg=33371.82, stdev=1505.24 00:31:33.624 lat (usec): min=18193, max=44338, avg=33411.56, stdev=1505.79 00:31:33.624 clat percentiles (usec): 00:31:33.624 | 1.00th=[29754], 5.00th=[32900], 10.00th=[32900], 20.00th=[32900], 00:31:33.624 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:31:33.624 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34866], 00:31:33.624 | 99.00th=[40633], 99.50th=[41157], 99.90th=[44303], 99.95th=[44303], 00:31:33.624 | 99.99th=[44303] 00:31:33.624 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1893.05, stdev=53.61, samples=19 00:31:33.624 iops : min= 448, max= 480, avg=473.26, stdev=13.40, samples=19 00:31:33.624 lat (msec) : 20=0.34%, 50=99.66% 00:31:33.624 cpu : usr=96.08%, sys=2.47%, ctx=131, majf=0, minf=9 00:31:33.624 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:33.624 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.624 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.624 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.624 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.624 filename0: (groupid=0, jobs=1): err= 0: pid=2679878: Wed Nov 20 07:33:35 2024 00:31:33.624 read: IOPS=475, BW=1900KiB/s (1946kB/s)(18.6MiB/10004msec) 00:31:33.624 slat (nsec): min=10822, max=91103, avg=38139.05, stdev=13068.27 00:31:33.624 clat (usec): min=10089, max=44359, avg=33356.54, stdev=2011.03 00:31:33.624 lat (usec): min=10102, max=44378, avg=33394.68, stdev=2011.60 00:31:33.624 clat percentiles (usec): 00:31:33.624 | 1.00th=[23987], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:31:33.624 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:31:33.624 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34866], 00:31:33.624 | 99.00th=[40633], 99.50th=[41157], 99.90th=[44303], 99.95th=[44303], 00:31:33.624 | 99.99th=[44303] 00:31:33.624 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1899.79, stdev=64.19, samples=19 00:31:33.624 iops : min= 448, max= 512, avg=474.95, stdev=16.05, samples=19 00:31:33.624 lat (msec) : 20=0.67%, 50=99.33% 00:31:33.624 cpu : usr=97.96%, sys=1.40%, ctx=73, majf=0, minf=9 00:31:33.624 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:33.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.624 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.624 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.624 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.624 filename0: (groupid=0, jobs=1): err= 0: pid=2679879: Wed Nov 20 07:33:35 2024 00:31:33.624 read: IOPS=472, BW=1890KiB/s (1936kB/s)(18.5MiB/10021msec) 00:31:33.624 slat (nsec): min=4201, max=76845, avg=36652.04, stdev=10792.83 00:31:33.624 clat (usec): min=20488, max=49216, avg=33521.73, stdev=1572.56 00:31:33.624 lat (usec): min=20510, max=49233, avg=33558.38, stdev=1571.60 00:31:33.624 clat percentiles (usec): 00:31:33.624 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:31:33.624 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:31:33.624 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:31:33.624 | 99.00th=[41157], 99.50th=[43779], 99.90th=[49021], 99.95th=[49021], 00:31:33.624 | 99.99th=[49021] 00:31:33.624 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1886.45, stdev=56.37, samples=20 00:31:33.624 iops : min= 448, max= 480, avg=471.60, stdev=14.09, samples=20 00:31:33.624 lat (msec) : 50=100.00% 00:31:33.624 cpu : usr=96.06%, sys=2.45%, ctx=314, majf=0, minf=9 00:31:33.624 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:33.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.624 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.624 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.624 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.624 filename0: (groupid=0, jobs=1): err= 0: pid=2679880: Wed Nov 20 07:33:35 2024 00:31:33.624 read: IOPS=471, BW=1887KiB/s (1932kB/s)(18.4MiB/10007msec) 00:31:33.624 slat (nsec): min=4103, max=76921, avg=37027.45, stdev=10494.13 00:31:33.624 clat (usec): min=32498, max=56078, avg=33606.99, stdev=1676.14 
00:31:33.624 lat (usec): min=32545, max=56096, avg=33644.01, stdev=1674.12 00:31:33.624 clat percentiles (usec): 00:31:33.624 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:31:33.624 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:31:33.624 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:31:33.624 | 99.00th=[41157], 99.50th=[43779], 99.90th=[55837], 99.95th=[55837], 00:31:33.624 | 99.99th=[55837] 00:31:33.624 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1886.32, stdev=57.91, samples=19 00:31:33.624 iops : min= 448, max= 480, avg=471.58, stdev=14.48, samples=19 00:31:33.624 lat (msec) : 50=99.66%, 100=0.34% 00:31:33.624 cpu : usr=96.72%, sys=2.11%, ctx=199, majf=0, minf=9 00:31:33.624 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:33.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.624 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.624 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.624 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.624 filename0: (groupid=0, jobs=1): err= 0: pid=2679881: Wed Nov 20 07:33:35 2024 00:31:33.624 read: IOPS=475, BW=1900KiB/s (1946kB/s)(18.6MiB/10003msec) 00:31:33.624 slat (nsec): min=6893, max=70375, avg=30686.41, stdev=11740.50 00:31:33.624 clat (usec): min=9551, max=44353, avg=33438.97, stdev=2025.83 00:31:33.624 lat (usec): min=9558, max=44374, avg=33469.66, stdev=2026.33 00:31:33.624 clat percentiles (usec): 00:31:33.624 | 1.00th=[22938], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:31:33.624 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:31:33.624 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:31:33.624 | 99.00th=[40633], 99.50th=[41157], 99.90th=[44303], 99.95th=[44303], 00:31:33.624 | 99.99th=[44303] 00:31:33.624 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1899.79, stdev=64.19, samples=19 00:31:33.624 iops : min= 448, max= 512, avg=474.95, stdev=16.05, samples=19 00:31:33.624 lat (msec) : 10=0.34%, 20=0.34%, 50=99.33% 00:31:33.624 cpu : usr=97.72%, sys=1.45%, ctx=138, majf=0, minf=9 00:31:33.624 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:33.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.624 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.624 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.624 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.624 filename0: (groupid=0, jobs=1): err= 0: pid=2679882: Wed Nov 20 07:33:35 2024 00:31:33.624 read: IOPS=475, BW=1900KiB/s (1946kB/s)(18.6MiB/10004msec) 00:31:33.624 slat (nsec): min=9370, max=73279, avg=25320.93, stdev=11269.31 00:31:33.624 clat (usec): min=10076, max=44300, avg=33490.49, stdev=1991.43 00:31:33.624 lat (usec): min=10093, max=44318, avg=33515.81, stdev=1990.98 00:31:33.624 clat percentiles (usec): 00:31:33.624 | 1.00th=[23725], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:31:33.624 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:31:33.624 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:31:33.624 | 99.00th=[40633], 99.50th=[41157], 99.90th=[44303], 99.95th=[44303], 00:31:33.624 | 99.99th=[44303] 00:31:33.624 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1899.79, stdev=64.19, samples=19 00:31:33.624 iops : 
min= 448, max= 512, avg=474.95, stdev=16.05, samples=19 00:31:33.624 lat (msec) : 20=0.67%, 50=99.33% 00:31:33.624 cpu : usr=97.61%, sys=1.69%, ctx=86, majf=0, minf=9 00:31:33.624 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:33.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.625 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.625 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.625 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.625 filename0: (groupid=0, jobs=1): err= 0: pid=2679883: Wed Nov 20 07:33:35 2024 00:31:33.625 read: IOPS=471, BW=1887KiB/s (1933kB/s)(18.4MiB/10003msec) 00:31:33.625 slat (nsec): min=4051, max=72855, avg=34054.87, stdev=9822.23 00:31:33.625 clat (usec): min=23873, max=76838, avg=33594.17, stdev=2129.28 00:31:33.625 lat (usec): min=23884, max=76856, avg=33628.23, stdev=2128.13 00:31:33.625 clat percentiles (usec): 00:31:33.625 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:31:33.625 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:31:33.625 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:31:33.625 | 99.00th=[43779], 99.50th=[44303], 99.90th=[60556], 99.95th=[61080], 00:31:33.625 | 99.99th=[77071] 00:31:33.625 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1886.32, stdev=71.93, samples=19 00:31:33.625 iops : min= 416, max= 480, avg=471.58, stdev=17.98, samples=19 00:31:33.625 lat (msec) : 50=99.66%, 100=0.34% 00:31:33.625 cpu : usr=97.25%, sys=1.67%, ctx=170, majf=0, minf=10 00:31:33.625 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:31:33.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.625 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.625 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.625 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.625 filename1: (groupid=0, jobs=1): err= 0: pid=2679884: Wed Nov 20 07:33:35 2024 00:31:33.625 read: IOPS=474, BW=1897KiB/s (1943kB/s)(18.6MiB/10018msec) 00:31:33.625 slat (nsec): min=6301, max=70672, avg=31490.03, stdev=11465.80 00:31:33.625 clat (usec): min=15399, max=44333, avg=33481.72, stdev=1605.82 00:31:33.625 lat (usec): min=15420, max=44357, avg=33513.21, stdev=1605.58 00:31:33.625 clat percentiles (usec): 00:31:33.625 | 1.00th=[29492], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:31:33.625 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:31:33.625 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:31:33.625 | 99.00th=[37487], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:31:33.625 | 99.99th=[44303] 00:31:33.625 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1893.05, stdev=53.61, samples=19 00:31:33.625 iops : min= 448, max= 480, avg=473.26, stdev=13.40, samples=19 00:31:33.625 lat (msec) : 20=0.34%, 50=99.66% 00:31:33.625 cpu : usr=97.73%, sys=1.54%, ctx=36, majf=0, minf=9 00:31:33.625 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:33.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.625 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.625 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.625 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.625 filename1: 
(groupid=0, jobs=1): err= 0: pid=2679885: Wed Nov 20 07:33:35 2024 00:31:33.625 read: IOPS=473, BW=1894KiB/s (1939kB/s)(18.5MiB/10004msec) 00:31:33.625 slat (nsec): min=12773, max=79496, avg=34039.93, stdev=12763.83 00:31:33.625 clat (usec): min=18350, max=44361, avg=33525.70, stdev=1351.47 00:31:33.625 lat (usec): min=18389, max=44387, avg=33559.74, stdev=1350.23 00:31:33.625 clat percentiles (usec): 00:31:33.625 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:31:33.625 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:31:33.625 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:31:33.625 | 99.00th=[40633], 99.50th=[41157], 99.90th=[44303], 99.95th=[44303], 00:31:33.625 | 99.99th=[44303] 00:31:33.625 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1893.05, stdev=53.61, samples=19 00:31:33.625 iops : min= 448, max= 480, avg=473.26, stdev=13.40, samples=19 00:31:33.625 lat (msec) : 20=0.34%, 50=99.66% 00:31:33.625 cpu : usr=98.47%, sys=1.12%, ctx=12, majf=0, minf=10 00:31:33.625 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:33.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.625 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.625 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.625 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.625 filename1: (groupid=0, jobs=1): err= 0: pid=2679886: Wed Nov 20 07:33:35 2024 00:31:33.625 read: IOPS=473, BW=1894KiB/s (1939kB/s)(18.5MiB/10002msec) 00:31:33.625 slat (nsec): min=7390, max=85549, avg=25153.49, stdev=13790.82 00:31:33.625 clat (usec): min=18196, max=44362, avg=33598.29, stdev=1360.77 00:31:33.625 lat (usec): min=18217, max=44383, avg=33623.44, stdev=1358.65 00:31:33.625 clat percentiles (usec): 00:31:33.625 | 1.00th=[32637], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:31:33.625 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:31:33.625 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:31:33.625 | 99.00th=[40633], 99.50th=[41157], 99.90th=[44303], 99.95th=[44303], 00:31:33.625 | 99.99th=[44303] 00:31:33.625 bw ( KiB/s): min= 1792, max= 1923, per=4.17%, avg=1893.21, stdev=53.70, samples=19 00:31:33.625 iops : min= 448, max= 480, avg=473.26, stdev=13.40, samples=19 00:31:33.625 lat (msec) : 20=0.34%, 50=99.66% 00:31:33.625 cpu : usr=97.82%, sys=1.37%, ctx=134, majf=0, minf=9 00:31:33.625 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:33.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.625 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.625 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.625 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.625 filename1: (groupid=0, jobs=1): err= 0: pid=2679887: Wed Nov 20 07:33:35 2024 00:31:33.625 read: IOPS=471, BW=1888KiB/s (1933kB/s)(18.4MiB/10001msec) 00:31:33.625 slat (usec): min=25, max=132, avg=84.20, stdev= 9.76 00:31:33.625 clat (usec): min=19359, max=64181, avg=33149.51, stdev=2238.04 00:31:33.625 lat (usec): min=19424, max=64241, avg=33233.71, stdev=2237.20 00:31:33.625 clat percentiles (usec): 00:31:33.625 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:31:33.625 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:31:33.625 | 70.00th=[33162], 
80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:31:33.625 | 99.00th=[40633], 99.50th=[43779], 99.90th=[64226], 99.95th=[64226], 00:31:33.625 | 99.99th=[64226] 00:31:33.625 bw ( KiB/s): min= 1667, max= 1920, per=4.16%, avg=1886.47, stdev=71.42, samples=19 00:31:33.625 iops : min= 416, max= 480, avg=471.58, stdev=17.98, samples=19 00:31:33.625 lat (msec) : 20=0.23%, 50=99.43%, 100=0.34% 00:31:33.625 cpu : usr=98.35%, sys=1.19%, ctx=9, majf=0, minf=9 00:31:33.625 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:33.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.625 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.625 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.625 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.625 filename1: (groupid=0, jobs=1): err= 0: pid=2679888: Wed Nov 20 07:33:35 2024 00:31:33.625 read: IOPS=473, BW=1893KiB/s (1938kB/s)(18.5MiB/10010msec) 00:31:33.625 slat (nsec): min=7307, max=68711, avg=35833.13, stdev=10055.20 00:31:33.625 clat (usec): min=20026, max=47022, avg=33493.12, stdev=1590.79 00:31:33.625 lat (usec): min=20042, max=47042, avg=33528.96, stdev=1590.60 00:31:33.625 clat percentiles (usec): 00:31:33.625 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:31:33.625 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:31:33.625 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34866], 00:31:33.625 | 99.00th=[43779], 99.50th=[44303], 99.90th=[46924], 99.95th=[46924], 00:31:33.625 | 99.99th=[46924] 00:31:33.625 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1888.00, stdev=56.87, samples=20 00:31:33.625 iops : min= 448, max= 480, avg=472.00, stdev=14.22, samples=20 00:31:33.625 lat (msec) : 50=100.00% 00:31:33.625 cpu : usr=98.46%, sys=1.15%, ctx=13, majf=0, minf=9 00:31:33.625 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:33.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.625 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.625 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.625 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.625 filename1: (groupid=0, jobs=1): err= 0: pid=2679889: Wed Nov 20 07:33:35 2024 00:31:33.625 read: IOPS=472, BW=1889KiB/s (1935kB/s)(18.5MiB/10026msec) 00:31:33.625 slat (nsec): min=7897, max=69373, avg=31047.59, stdev=11574.75 00:31:33.625 clat (usec): min=20087, max=49053, avg=33608.06, stdev=1688.04 00:31:33.625 lat (usec): min=20098, max=49085, avg=33639.11, stdev=1687.24 00:31:33.625 clat percentiles (usec): 00:31:33.625 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:31:33.625 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:31:33.625 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:31:33.625 | 99.00th=[41681], 99.50th=[44303], 99.90th=[49021], 99.95th=[49021], 00:31:33.625 | 99.99th=[49021] 00:31:33.625 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1886.45, stdev=56.37, samples=20 00:31:33.625 iops : min= 448, max= 480, avg=471.60, stdev=14.09, samples=20 00:31:33.625 lat (msec) : 50=100.00% 00:31:33.625 cpu : usr=98.38%, sys=1.22%, ctx=13, majf=0, minf=9 00:31:33.625 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:33.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:31:33.625 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.625 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.625 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.625 filename1: (groupid=0, jobs=1): err= 0: pid=2679890: Wed Nov 20 07:33:35 2024 00:31:33.625 read: IOPS=475, BW=1900KiB/s (1946kB/s)(18.6MiB/10004msec) 00:31:33.625 slat (nsec): min=10062, max=76147, avg=36307.50, stdev=10157.62 00:31:33.625 clat (usec): min=10988, max=44396, avg=33370.52, stdev=1967.93 00:31:33.625 lat (usec): min=11007, max=44418, avg=33406.83, stdev=1968.63 00:31:33.625 clat percentiles (usec): 00:31:33.625 | 1.00th=[23200], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:31:33.625 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:31:33.626 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34866], 00:31:33.626 | 99.00th=[40633], 99.50th=[41157], 99.90th=[44303], 99.95th=[44303], 00:31:33.626 | 99.99th=[44303] 00:31:33.626 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1899.79, stdev=64.19, samples=19 00:31:33.626 iops : min= 448, max= 512, avg=474.95, stdev=16.05, samples=19 00:31:33.626 lat (msec) : 20=0.67%, 50=99.33% 00:31:33.626 cpu : usr=98.49%, sys=1.12%, ctx=12, majf=0, minf=9 00:31:33.626 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:33.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.626 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.626 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.626 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.626 filename1: (groupid=0, jobs=1): err= 0: pid=2679891: Wed Nov 20 07:33:35 2024 00:31:33.626 read: IOPS=472, BW=1891KiB/s (1937kB/s)(18.5MiB/10017msec) 00:31:33.626 slat (nsec): min=7378, max=71521, avg=36114.34, stdev=9992.15 00:31:33.626 clat (usec): min=20017, max=63248, avg=33528.70, stdev=1882.83 00:31:33.626 lat (usec): min=20041, max=63270, avg=33564.82, stdev=1881.93 00:31:33.626 clat percentiles (usec): 00:31:33.626 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:31:33.626 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:31:33.626 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34866], 00:31:33.626 | 99.00th=[43779], 99.50th=[44303], 99.90th=[53740], 99.95th=[53740], 00:31:33.626 | 99.99th=[63177] 00:31:33.626 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1888.15, stdev=56.60, samples=20 00:31:33.626 iops : min= 448, max= 480, avg=472.00, stdev=14.22, samples=20 00:31:33.626 lat (msec) : 50=99.66%, 100=0.34% 00:31:33.626 cpu : usr=98.31%, sys=1.29%, ctx=13, majf=0, minf=9 00:31:33.626 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:33.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.626 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.626 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.626 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.626 filename2: (groupid=0, jobs=1): err= 0: pid=2679892: Wed Nov 20 07:33:35 2024 00:31:33.626 read: IOPS=473, BW=1892KiB/s (1938kB/s)(18.5MiB/10011msec) 00:31:33.626 slat (nsec): min=9545, max=76286, avg=37166.48, stdev=10198.43 00:31:33.626 clat (usec): min=12877, max=56615, avg=33479.87, stdev=2143.80 00:31:33.626 lat (usec): min=12900, 
max=56635, avg=33517.03, stdev=2143.39 00:31:33.626 clat percentiles (usec): 00:31:33.626 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:31:33.626 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:31:33.626 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:31:33.626 | 99.00th=[40633], 99.50th=[44303], 99.90th=[56361], 99.95th=[56361], 00:31:33.626 | 99.99th=[56361] 00:31:33.626 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1888.00, stdev=56.87, samples=20 00:31:33.626 iops : min= 448, max= 480, avg=472.00, stdev=14.22, samples=20 00:31:33.626 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:31:33.626 cpu : usr=98.69%, sys=0.92%, ctx=13, majf=0, minf=9 00:31:33.626 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:33.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.626 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.626 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.626 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.626 filename2: (groupid=0, jobs=1): err= 0: pid=2679893: Wed Nov 20 07:33:35 2024 00:31:33.626 read: IOPS=472, BW=1891KiB/s (1936kB/s)(18.5MiB/10018msec) 00:31:33.626 slat (nsec): min=7200, max=73779, avg=35406.25, stdev=11018.06 00:31:33.626 clat (usec): min=20086, max=64324, avg=33548.96, stdev=1926.63 00:31:33.626 lat (usec): min=20103, max=64345, avg=33584.36, stdev=1925.46 00:31:33.626 clat percentiles (usec): 00:31:33.626 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:31:33.626 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:31:33.626 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:31:33.626 | 99.00th=[43779], 99.50th=[44303], 99.90th=[54789], 99.95th=[54789], 00:31:33.626 | 99.99th=[64226] 00:31:33.626 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1888.00, stdev=56.87, samples=20 00:31:33.626 iops : min= 448, max= 480, avg=472.00, stdev=14.22, samples=20 00:31:33.626 lat (msec) : 50=99.66%, 100=0.34% 00:31:33.626 cpu : usr=98.41%, sys=1.19%, ctx=12, majf=0, minf=9 00:31:33.626 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:33.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.626 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.626 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.626 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.626 filename2: (groupid=0, jobs=1): err= 0: pid=2679894: Wed Nov 20 07:33:35 2024 00:31:33.626 read: IOPS=472, BW=1892KiB/s (1937kB/s)(18.5MiB/10014msec) 00:31:33.626 slat (nsec): min=7179, max=72405, avg=36117.36, stdev=10361.56 00:31:33.626 clat (usec): min=20045, max=50132, avg=33502.11, stdev=1707.22 00:31:33.626 lat (usec): min=20059, max=50152, avg=33538.23, stdev=1706.77 00:31:33.626 clat percentiles (usec): 00:31:33.626 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:31:33.626 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:31:33.626 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34866], 00:31:33.626 | 99.00th=[43779], 99.50th=[44303], 99.90th=[50070], 99.95th=[50070], 00:31:33.626 | 99.99th=[50070] 00:31:33.626 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1887.40, stdev=56.57, samples=20 00:31:33.626 iops : min= 448, max= 480, avg=471.85, 
stdev=14.14, samples=20 00:31:33.626 lat (msec) : 50=99.66%, 100=0.34% 00:31:33.626 cpu : usr=98.54%, sys=1.07%, ctx=12, majf=0, minf=9 00:31:33.626 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:33.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.626 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.626 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.626 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.626 filename2: (groupid=0, jobs=1): err= 0: pid=2679895: Wed Nov 20 07:33:35 2024 00:31:33.626 read: IOPS=473, BW=1893KiB/s (1938kB/s)(18.5MiB/10009msec) 00:31:33.626 slat (nsec): min=7089, max=78563, avg=37033.58, stdev=11157.66 00:31:33.626 clat (usec): min=12912, max=54919, avg=33468.27, stdev=2084.58 00:31:33.626 lat (usec): min=12921, max=54947, avg=33505.30, stdev=2084.26 00:31:33.626 clat percentiles (usec): 00:31:33.626 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:31:33.626 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:31:33.626 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34866], 00:31:33.626 | 99.00th=[40633], 99.50th=[44303], 99.90th=[54789], 99.95th=[54789], 00:31:33.626 | 99.99th=[54789] 00:31:33.626 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1886.32, stdev=57.91, samples=19 00:31:33.626 iops : min= 448, max= 480, avg=471.58, stdev=14.48, samples=19 00:31:33.626 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:31:33.626 cpu : usr=97.43%, sys=1.73%, ctx=96, majf=0, minf=9 00:31:33.626 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:33.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.626 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.626 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.626 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.626 filename2: (groupid=0, jobs=1): err= 0: pid=2679896: Wed Nov 20 07:33:35 2024 00:31:33.626 read: IOPS=471, BW=1888KiB/s (1933kB/s)(18.4MiB/10002msec) 00:31:33.626 slat (nsec): min=6678, max=73459, avg=34727.32, stdev=9985.88 00:31:33.626 clat (usec): min=20275, max=79402, avg=33593.57, stdev=2380.13 00:31:33.626 lat (usec): min=20285, max=79423, avg=33628.29, stdev=2379.24 00:31:33.626 clat percentiles (usec): 00:31:33.626 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:31:33.626 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:31:33.626 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:31:33.626 | 99.00th=[41157], 99.50th=[44303], 99.90th=[64750], 99.95th=[64750], 00:31:33.626 | 99.99th=[79168] 00:31:33.626 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1886.32, stdev=71.93, samples=19 00:31:33.626 iops : min= 416, max= 480, avg=471.58, stdev=17.98, samples=19 00:31:33.626 lat (msec) : 50=99.66%, 100=0.34% 00:31:33.626 cpu : usr=98.38%, sys=1.23%, ctx=21, majf=0, minf=9 00:31:33.626 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:33.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.626 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.626 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.626 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.626 filename2: (groupid=0, jobs=1): err= 
0: pid=2679897: Wed Nov 20 07:33:35 2024 00:31:33.626 read: IOPS=479, BW=1916KiB/s (1962kB/s)(18.7MiB/10014msec) 00:31:33.626 slat (nsec): min=8082, max=59934, avg=17140.39, stdev=10342.89 00:31:33.626 clat (usec): min=6955, max=44170, avg=33267.17, stdev=2878.18 00:31:33.626 lat (usec): min=6971, max=44195, avg=33284.31, stdev=2877.73 00:31:33.626 clat percentiles (usec): 00:31:33.626 | 1.00th=[16188], 5.00th=[33162], 10.00th=[33162], 20.00th=[33162], 00:31:33.626 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:31:33.626 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:31:33.626 | 99.00th=[40633], 99.50th=[41681], 99.90th=[44303], 99.95th=[44303], 00:31:33.626 | 99.99th=[44303] 00:31:33.626 bw ( KiB/s): min= 1792, max= 2280, per=4.21%, avg=1912.40, stdev=101.04, samples=20 00:31:33.626 iops : min= 448, max= 570, avg=478.10, stdev=25.26, samples=20 00:31:33.626 lat (msec) : 10=0.15%, 20=1.40%, 50=98.46% 00:31:33.626 cpu : usr=98.53%, sys=1.07%, ctx=12, majf=0, minf=9 00:31:33.626 IO depths : 1=6.1%, 2=12.1%, 4=24.4%, 8=50.9%, 16=6.5%, 32=0.0%, >=64=0.0% 00:31:33.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.626 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.627 issued rwts: total=4797,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.627 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.627 filename2: (groupid=0, jobs=1): err= 0: pid=2679898: Wed Nov 20 07:33:35 2024 00:31:33.627 read: IOPS=474, BW=1897KiB/s (1943kB/s)(18.6MiB/10020msec) 00:31:33.627 slat (usec): min=11, max=143, avg=83.83, stdev=10.23 00:31:33.627 clat (usec): min=15852, max=44284, avg=32992.29, stdev=1585.71 00:31:33.627 lat (usec): min=15880, max=44385, avg=33076.12, stdev=1587.45 00:31:33.627 clat percentiles (usec): 00:31:33.627 | 1.00th=[28967], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:31:33.627 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:31:33.627 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:31:33.627 | 99.00th=[36963], 99.50th=[43254], 99.90th=[44303], 99.95th=[44303], 00:31:33.627 | 99.99th=[44303] 00:31:33.627 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1894.40, stdev=52.53, samples=20 00:31:33.627 iops : min= 448, max= 480, avg=473.60, stdev=13.13, samples=20 00:31:33.627 lat (msec) : 20=0.34%, 50=99.66% 00:31:33.627 cpu : usr=98.46%, sys=1.09%, ctx=11, majf=0, minf=9 00:31:33.627 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:33.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.627 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.627 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.627 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.627 filename2: (groupid=0, jobs=1): err= 0: pid=2679899: Wed Nov 20 07:33:35 2024 00:31:33.627 read: IOPS=473, BW=1892KiB/s (1938kB/s)(18.5MiB/10011msec) 00:31:33.627 slat (usec): min=6, max=124, avg=43.26, stdev=17.95 00:31:33.627 clat (usec): min=13094, max=57002, avg=33434.79, stdev=2191.43 00:31:33.627 lat (usec): min=13141, max=57023, avg=33478.06, stdev=2190.39 00:31:33.627 clat percentiles (usec): 00:31:33.627 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:31:33.627 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:31:33.627 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 
95.00th=[34866], 00:31:33.627 | 99.00th=[41157], 99.50th=[44303], 99.90th=[56886], 99.95th=[56886], 00:31:33.627 | 99.99th=[56886] 00:31:33.627 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1888.00, stdev=56.87, samples=20 00:31:33.627 iops : min= 448, max= 480, avg=472.00, stdev=14.22, samples=20 00:31:33.627 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:31:33.627 cpu : usr=98.57%, sys=1.03%, ctx=32, majf=0, minf=9 00:31:33.627 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:33.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.627 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.627 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.627 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:33.627 00:31:33.627 Run status group 0 (all jobs): 00:31:33.627 READ: bw=44.3MiB/s (46.5MB/s), 1887KiB/s-1916KiB/s (1932kB/s-1962kB/s), io=444MiB (466MB), run=10001-10026msec 00:31:33.627 07:33:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:33.627 07:33:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:33.627 07:33:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:33.627 07:33:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:33.627 07:33:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:33.627 07:33:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:33.627 07:33:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.627 07:33:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:33.627 07:33:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.627 07:33:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:33.627 07:33:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.627 07:33:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:33.627 07:33:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.627 07:33:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:33.627 07:33:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:33.627 07:33:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:33.627 07:33:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:33.627 07:33:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.627 07:33:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:33.627 07:33:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.627 07:33:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:33.627 07:33:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.627 07:33:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:33.627 07:33:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.627 07:33:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:33.627 
07:33:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:33.627 07:33:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:31:33.627 07:33:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:33.627 07:33:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.627 07:33:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:33.627 07:33:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.627 07:33:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:33.627 07:33:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.627 07:33:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:33.627 bdev_null0 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:33.627 [2024-11-20 07:33:36.033618] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:33.627 bdev_null1 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:33.627 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:33.628 { 00:31:33.628 "params": { 00:31:33.628 "name": "Nvme$subsystem", 00:31:33.628 "trtype": "$TEST_TRANSPORT", 00:31:33.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:33.628 "adrfam": "ipv4", 00:31:33.628 "trsvcid": "$NVMF_PORT", 00:31:33.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:33.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:33.628 "hdgst": ${hdgst:-false}, 00:31:33.628 "ddgst": ${ddgst:-false} 00:31:33.628 }, 00:31:33.628 "method": "bdev_nvme_attach_controller" 00:31:33.628 } 00:31:33.628 EOF 00:31:33.628 )") 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:33.628 { 00:31:33.628 "params": { 00:31:33.628 "name": "Nvme$subsystem", 00:31:33.628 "trtype": "$TEST_TRANSPORT", 00:31:33.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:33.628 "adrfam": "ipv4", 00:31:33.628 "trsvcid": "$NVMF_PORT", 00:31:33.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:33.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:33.628 "hdgst": ${hdgst:-false}, 00:31:33.628 "ddgst": ${ddgst:-false} 00:31:33.628 }, 00:31:33.628 "method": "bdev_nvme_attach_controller" 00:31:33.628 } 00:31:33.628 EOF 00:31:33.628 )") 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@582 -- # cat 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:33.628 "params": { 00:31:33.628 "name": "Nvme0", 00:31:33.628 "trtype": "tcp", 00:31:33.628 "traddr": "10.0.0.2", 00:31:33.628 "adrfam": "ipv4", 00:31:33.628 "trsvcid": "4420", 00:31:33.628 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:33.628 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:33.628 "hdgst": false, 00:31:33.628 "ddgst": false 00:31:33.628 }, 00:31:33.628 "method": "bdev_nvme_attach_controller" 00:31:33.628 },{ 00:31:33.628 "params": { 00:31:33.628 "name": "Nvme1", 00:31:33.628 "trtype": "tcp", 00:31:33.628 "traddr": "10.0.0.2", 00:31:33.628 "adrfam": "ipv4", 00:31:33.628 "trsvcid": "4420", 00:31:33.628 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:33.628 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:33.628 "hdgst": false, 00:31:33.628 "ddgst": false 00:31:33.628 }, 00:31:33.628 "method": "bdev_nvme_attach_controller" 00:31:33.628 }' 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:33.628 07:33:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:33.628 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:33.628 ... 00:31:33.628 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:33.628 ... 
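The invocation traced above boils down to running fio's SPDK bdev engine (loaded via LD_PRELOAD) against a JSON config that attaches the two target subsystems with bdev_nvme_attach_controller. A minimal standalone equivalent is sketched below; the plugin path, the bdev.json file, and the bdev name Nvme0n1 are assumptions for illustration, not values taken from this run.

# Sketch only: assumes the fio bdev plugin was built under build/fio/spdk_bdev and
# that bdev.json carries the same bdev_nvme_attach_controller entries printed above.
LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json --thread=1 \
    --name=job0 --filename=Nvme0n1 --rw=randread --bs=8k --iodepth=8 \
    --numjobs=2 --runtime=5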
00:31:33.628 fio-3.35 00:31:33.628 Starting 4 threads 00:31:38.893 00:31:38.893 filename0: (groupid=0, jobs=1): err= 0: pid=2681162: Wed Nov 20 07:33:42 2024 00:31:38.893 read: IOPS=1864, BW=14.6MiB/s (15.3MB/s)(72.9MiB/5002msec) 00:31:38.893 slat (nsec): min=4096, max=74978, avg=14759.51, stdev=8812.65 00:31:38.893 clat (usec): min=517, max=7710, avg=4241.67, stdev=519.14 00:31:38.893 lat (usec): min=531, max=7718, avg=4256.43, stdev=519.75 00:31:38.893 clat percentiles (usec): 00:31:38.893 | 1.00th=[ 2704], 5.00th=[ 3458], 10.00th=[ 3720], 20.00th=[ 4015], 00:31:38.893 | 30.00th=[ 4146], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4293], 00:31:38.893 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4621], 95.00th=[ 4883], 00:31:38.893 | 99.00th=[ 6063], 99.50th=[ 6390], 99.90th=[ 7373], 99.95th=[ 7635], 00:31:38.893 | 99.99th=[ 7701] 00:31:38.893 bw ( KiB/s): min=14592, max=15408, per=25.31%, avg=14931.56, stdev=275.36, samples=9 00:31:38.893 iops : min= 1824, max= 1926, avg=1866.44, stdev=34.42, samples=9 00:31:38.893 lat (usec) : 750=0.01%, 1000=0.05% 00:31:38.893 lat (msec) : 2=0.41%, 4=18.37%, 10=81.16% 00:31:38.893 cpu : usr=94.98%, sys=4.54%, ctx=9, majf=0, minf=133 00:31:38.893 IO depths : 1=0.4%, 2=11.8%, 4=60.3%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:38.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.893 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.893 issued rwts: total=9327,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.893 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:38.893 filename0: (groupid=0, jobs=1): err= 0: pid=2681163: Wed Nov 20 07:33:42 2024 00:31:38.893 read: IOPS=1815, BW=14.2MiB/s (14.9MB/s)(71.0MiB/5002msec) 00:31:38.893 slat (nsec): min=4144, max=80809, avg=18982.31, stdev=11121.22 00:31:38.893 clat (usec): min=742, max=7850, avg=4336.35, stdev=621.06 00:31:38.893 lat (usec): min=761, max=7858, avg=4355.34, stdev=620.48 00:31:38.893 clat percentiles (usec): 00:31:38.893 | 1.00th=[ 2573], 5.00th=[ 3556], 10.00th=[ 3884], 20.00th=[ 4113], 00:31:38.893 | 30.00th=[ 4178], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:31:38.893 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4817], 95.00th=[ 5407], 00:31:38.893 | 99.00th=[ 6783], 99.50th=[ 7242], 99.90th=[ 7635], 99.95th=[ 7767], 00:31:38.893 | 99.99th=[ 7832] 00:31:38.893 bw ( KiB/s): min=14352, max=14992, per=24.65%, avg=14538.22, stdev=206.25, samples=9 00:31:38.893 iops : min= 1794, max= 1874, avg=1817.22, stdev=25.80, samples=9 00:31:38.893 lat (usec) : 750=0.01%, 1000=0.08% 00:31:38.893 lat (msec) : 2=0.62%, 4=12.39%, 10=86.91% 00:31:38.893 cpu : usr=94.66%, sys=4.84%, ctx=10, majf=0, minf=100 00:31:38.893 IO depths : 1=0.2%, 2=14.5%, 4=58.1%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:38.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.893 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.893 issued rwts: total=9083,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.893 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:38.893 filename1: (groupid=0, jobs=1): err= 0: pid=2681164: Wed Nov 20 07:33:42 2024 00:31:38.893 read: IOPS=1838, BW=14.4MiB/s (15.1MB/s)(71.9MiB/5003msec) 00:31:38.893 slat (nsec): min=4147, max=81185, avg=19032.90, stdev=10933.99 00:31:38.893 clat (usec): min=654, max=7860, avg=4280.48, stdev=586.95 00:31:38.893 lat (usec): min=668, max=7874, avg=4299.51, stdev=586.77 00:31:38.893 clat percentiles (usec): 00:31:38.893 | 
1.00th=[ 2343], 5.00th=[ 3523], 10.00th=[ 3818], 20.00th=[ 4080], 00:31:38.893 | 30.00th=[ 4146], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4293], 00:31:38.893 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4686], 95.00th=[ 5211], 00:31:38.893 | 99.00th=[ 6521], 99.50th=[ 6849], 99.90th=[ 7635], 99.95th=[ 7701], 00:31:38.893 | 99.99th=[ 7832] 00:31:38.893 bw ( KiB/s): min=14476, max=15104, per=24.93%, avg=14706.80, stdev=190.12, samples=10 00:31:38.893 iops : min= 1809, max= 1888, avg=1838.30, stdev=23.83, samples=10 00:31:38.893 lat (usec) : 750=0.01%, 1000=0.03% 00:31:38.893 lat (msec) : 2=0.72%, 4=14.66%, 10=84.58% 00:31:38.893 cpu : usr=94.82%, sys=4.68%, ctx=9, majf=0, minf=74 00:31:38.893 IO depths : 1=0.5%, 2=16.5%, 4=56.8%, 8=26.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:38.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.893 complete : 0=0.0%, 4=91.4%, 8=8.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.893 issued rwts: total=9198,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.893 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:38.893 filename1: (groupid=0, jobs=1): err= 0: pid=2681165: Wed Nov 20 07:33:42 2024 00:31:38.893 read: IOPS=1856, BW=14.5MiB/s (15.2MB/s)(72.6MiB/5004msec) 00:31:38.893 slat (usec): min=4, max=110, avg=19.44, stdev= 9.66 00:31:38.893 clat (usec): min=987, max=7496, avg=4240.49, stdev=529.60 00:31:38.893 lat (usec): min=1006, max=7518, avg=4259.93, stdev=529.86 00:31:38.893 clat percentiles (usec): 00:31:38.893 | 1.00th=[ 2540], 5.00th=[ 3490], 10.00th=[ 3752], 20.00th=[ 4047], 00:31:38.893 | 30.00th=[ 4146], 40.00th=[ 4228], 50.00th=[ 4228], 60.00th=[ 4293], 00:31:38.893 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4621], 95.00th=[ 5014], 00:31:38.893 | 99.00th=[ 6128], 99.50th=[ 6783], 99.90th=[ 7177], 99.95th=[ 7308], 00:31:38.893 | 99.99th=[ 7504] 00:31:38.893 bw ( KiB/s): min=14512, max=15264, per=25.17%, avg=14849.60, stdev=198.91, samples=10 00:31:38.893 iops : min= 1814, max= 1908, avg=1856.20, stdev=24.86, samples=10 00:31:38.893 lat (usec) : 1000=0.01% 00:31:38.893 lat (msec) : 2=0.53%, 4=17.89%, 10=81.57% 00:31:38.893 cpu : usr=95.14%, sys=4.36%, ctx=13, majf=0, minf=187 00:31:38.893 IO depths : 1=0.4%, 2=17.2%, 4=55.8%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:38.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.893 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.893 issued rwts: total=9289,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.893 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:38.893 00:31:38.893 Run status group 0 (all jobs): 00:31:38.893 READ: bw=57.6MiB/s (60.4MB/s), 14.2MiB/s-14.6MiB/s (14.9MB/s-15.3MB/s), io=288MiB (302MB), run=5002-5004msec 00:31:39.151 07:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:39.151 07:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:39.151 07:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:39.151 07:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:39.151 07:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:39.151 07:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:39.151 07:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.151 07:33:42 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:31:39.151 07:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.151 07:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:39.151 07:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.151 07:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:39.151 07:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.151 07:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:39.151 07:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:39.151 07:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:39.151 07:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:39.151 07:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.151 07:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:39.151 07:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.151 07:33:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:39.151 07:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.151 07:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:39.151 07:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.151 00:31:39.151 real 0m24.632s 00:31:39.151 user 4m33.174s 00:31:39.151 sys 0m6.471s 00:31:39.151 07:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:39.151 07:33:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:39.151 ************************************ 00:31:39.151 END TEST fio_dif_rand_params 00:31:39.151 ************************************ 00:31:39.151 07:33:42 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:39.151 07:33:42 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:31:39.151 07:33:42 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:39.151 07:33:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:39.151 ************************************ 00:31:39.151 START TEST fio_dif_digest 00:31:39.151 ************************************ 00:31:39.151 07:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:31:39.151 07:33:42 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:31:39.151 07:33:42 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:39.151 07:33:42 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:31:39.151 07:33:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:31:39.151 07:33:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:39.151 07:33:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:31:39.151 07:33:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:31:39.151 07:33:42 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:31:39.151 07:33:42 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:31:39.151 07:33:42 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:31:39.151 07:33:42 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:31:39.151 07:33:42 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:31:39.151 07:33:42 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:31:39.151 07:33:42 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:31:39.151 07:33:42 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:31:39.151 07:33:42 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:39.151 07:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.151 07:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:39.151 bdev_null0 00:31:39.151 07:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.152 07:33:42 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:39.152 07:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.152 07:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:39.410 07:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.410 07:33:42 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:39.410 07:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.410 07:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:39.410 07:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.410 07:33:42 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:39.410 07:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.410 07:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:39.410 [2024-11-20 07:33:42.597173] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:39.410 07:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.410 07:33:42 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:39.410 07:33:42 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:39.410 07:33:42 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:39.410 07:33:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:31:39.410 07:33:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:31:39.410 07:33:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:39.410 07:33:42 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:39.410 07:33:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:39.410 { 00:31:39.410 "params": { 00:31:39.410 "name": "Nvme$subsystem", 00:31:39.410 "trtype": "$TEST_TRANSPORT", 00:31:39.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:39.410 "adrfam": "ipv4", 00:31:39.410 "trsvcid": "$NVMF_PORT", 00:31:39.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:39.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:39.410 
"hdgst": ${hdgst:-false}, 00:31:39.410 "ddgst": ${ddgst:-false} 00:31:39.410 }, 00:31:39.410 "method": "bdev_nvme_attach_controller" 00:31:39.410 } 00:31:39.410 EOF 00:31:39.410 )") 00:31:39.410 07:33:42 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:31:39.410 07:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:39.410 07:33:42 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:31:39.410 07:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:31:39.410 07:33:42 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:31:39.410 07:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:39.410 07:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:31:39.410 07:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:39.410 07:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:31:39.410 07:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:31:39.410 07:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:31:39.410 07:33:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:31:39.410 07:33:42 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:31:39.410 07:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:39.410 07:33:42 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:31:39.410 07:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:31:39.410 07:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:31:39.410 07:33:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:31:39.410 07:33:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:31:39.410 07:33:42 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:39.410 "params": { 00:31:39.410 "name": "Nvme0", 00:31:39.410 "trtype": "tcp", 00:31:39.410 "traddr": "10.0.0.2", 00:31:39.410 "adrfam": "ipv4", 00:31:39.410 "trsvcid": "4420", 00:31:39.410 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:39.410 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:39.410 "hdgst": true, 00:31:39.410 "ddgst": true 00:31:39.410 }, 00:31:39.410 "method": "bdev_nvme_attach_controller" 00:31:39.410 }' 00:31:39.410 07:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:31:39.411 07:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:31:39.411 07:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:31:39.411 07:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:31:39.411 07:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:39.411 07:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:31:39.411 07:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:31:39.411 07:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:31:39.411 07:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:39.411 07:33:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:39.668 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:39.668 ... 
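For the digest test that follows, the target side is a single DIF type 3 null bdev exported over NVMe/TCP, and the initiator attaches with header and data digests enabled. Written out as direct rpc.py calls, the setup traced above looks like the sketch below; the rpc.py path is an assumption, while the RPC names and arguments are the ones issued via rpc_cmd in this run.

# Target setup, mirroring the rpc_cmd calls traced above (rpc.py path assumed):
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# Initiator side: the generated fio JSON attaches with "hdgst": true and "ddgst": true,
# so every NVMe/TCP PDU in this test carries header and data digests.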
00:31:39.669 fio-3.35 00:31:39.669 Starting 3 threads 00:31:51.867 00:31:51.867 filename0: (groupid=0, jobs=1): err= 0: pid=2682032: Wed Nov 20 07:33:53 2024 00:31:51.867 read: IOPS=198, BW=24.9MiB/s (26.1MB/s)(249MiB/10009msec) 00:31:51.867 slat (nsec): min=7775, max=90405, avg=15016.29, stdev=4837.69 00:31:51.867 clat (usec): min=8401, max=95588, avg=15070.36, stdev=3260.52 00:31:51.867 lat (usec): min=8414, max=95609, avg=15085.37, stdev=3260.67 00:31:51.867 clat percentiles (usec): 00:31:51.867 | 1.00th=[12125], 5.00th=[13173], 10.00th=[13566], 20.00th=[13960], 00:31:51.867 | 30.00th=[14353], 40.00th=[14615], 50.00th=[15008], 60.00th=[15270], 00:31:51.867 | 70.00th=[15401], 80.00th=[15664], 90.00th=[16188], 95.00th=[16712], 00:31:51.867 | 99.00th=[17957], 99.50th=[20317], 99.90th=[58983], 99.95th=[95945], 00:31:51.867 | 99.99th=[95945] 00:31:51.867 bw ( KiB/s): min=20992, max=26624, per=33.96%, avg=25433.60, stdev=1184.19, samples=20 00:31:51.867 iops : min= 164, max= 208, avg=198.70, stdev= 9.25, samples=20 00:31:51.867 lat (msec) : 10=0.15%, 20=99.30%, 50=0.15%, 100=0.40% 00:31:51.867 cpu : usr=93.35%, sys=6.14%, ctx=16, majf=0, minf=205 00:31:51.867 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:51.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.867 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.867 issued rwts: total=1990,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.867 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:51.867 filename0: (groupid=0, jobs=1): err= 0: pid=2682033: Wed Nov 20 07:33:53 2024 00:31:51.867 read: IOPS=193, BW=24.1MiB/s (25.3MB/s)(243MiB/10047msec) 00:31:51.867 slat (nsec): min=7376, max=60019, avg=15011.16, stdev=4487.87 00:31:51.867 clat (usec): min=9031, max=49556, avg=15489.16, stdev=1571.00 00:31:51.867 lat (usec): min=9044, max=49571, avg=15504.17, stdev=1570.97 00:31:51.867 clat percentiles (usec): 00:31:51.867 | 1.00th=[11207], 5.00th=[13698], 10.00th=[14222], 20.00th=[14746], 00:31:51.867 | 30.00th=[15008], 40.00th=[15270], 50.00th=[15533], 60.00th=[15664], 00:31:51.867 | 70.00th=[15926], 80.00th=[16319], 90.00th=[16712], 95.00th=[17171], 00:31:51.867 | 99.00th=[17957], 99.50th=[18482], 99.90th=[46924], 99.95th=[49546], 00:31:51.867 | 99.99th=[49546] 00:31:51.867 bw ( KiB/s): min=24064, max=26624, per=33.12%, avg=24806.40, stdev=653.47, samples=20 00:31:51.867 iops : min= 188, max= 208, avg=193.80, stdev= 5.11, samples=20 00:31:51.867 lat (msec) : 10=0.10%, 20=99.69%, 50=0.21% 00:31:51.867 cpu : usr=93.58%, sys=5.90%, ctx=26, majf=0, minf=129 00:31:51.867 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:51.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.867 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.867 issued rwts: total=1941,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.867 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:51.867 filename0: (groupid=0, jobs=1): err= 0: pid=2682034: Wed Nov 20 07:33:53 2024 00:31:51.867 read: IOPS=193, BW=24.2MiB/s (25.4MB/s)(243MiB/10045msec) 00:31:51.867 slat (nsec): min=8098, max=50437, avg=16595.37, stdev=5199.96 00:31:51.867 clat (usec): min=9203, max=58052, avg=15435.48, stdev=2376.52 00:31:51.867 lat (usec): min=9223, max=58066, avg=15452.07, stdev=2376.25 00:31:51.867 clat percentiles (usec): 00:31:51.867 | 1.00th=[10683], 5.00th=[13566], 10.00th=[14091], 20.00th=[14484], 
00:31:51.867 | 30.00th=[14877], 40.00th=[15139], 50.00th=[15401], 60.00th=[15664], 00:31:51.867 | 70.00th=[15926], 80.00th=[16188], 90.00th=[16712], 95.00th=[17171], 00:31:51.867 | 99.00th=[17957], 99.50th=[18744], 99.90th=[57410], 99.95th=[57934], 00:31:51.867 | 99.99th=[57934] 00:31:51.867 bw ( KiB/s): min=23040, max=27392, per=33.25%, avg=24898.40, stdev=806.48, samples=20 00:31:51.867 iops : min= 180, max= 214, avg=194.50, stdev= 6.32, samples=20 00:31:51.867 lat (msec) : 10=0.26%, 20=99.33%, 50=0.15%, 100=0.26% 00:31:51.867 cpu : usr=93.74%, sys=5.73%, ctx=29, majf=0, minf=145 00:31:51.867 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:51.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.867 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.867 issued rwts: total=1947,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.867 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:51.867 00:31:51.867 Run status group 0 (all jobs): 00:31:51.867 READ: bw=73.1MiB/s (76.7MB/s), 24.1MiB/s-24.9MiB/s (25.3MB/s-26.1MB/s), io=735MiB (770MB), run=10009-10047msec 00:31:51.867 07:33:53 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:51.867 07:33:53 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:31:51.867 07:33:53 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:31:51.867 07:33:53 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:51.867 07:33:53 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:31:51.867 07:33:53 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:51.867 07:33:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.867 07:33:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:51.867 07:33:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.867 07:33:53 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:51.867 07:33:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.867 07:33:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:51.867 07:33:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.867 00:31:51.867 real 0m11.293s 00:31:51.867 user 0m29.407s 00:31:51.867 sys 0m2.067s 00:31:51.867 07:33:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:51.867 07:33:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:51.867 ************************************ 00:31:51.867 END TEST fio_dif_digest 00:31:51.867 ************************************ 00:31:51.867 07:33:53 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:51.867 07:33:53 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:31:51.867 07:33:53 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:51.867 07:33:53 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:31:51.867 07:33:53 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:51.867 07:33:53 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:31:51.867 07:33:53 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:51.867 07:33:53 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:51.867 rmmod nvme_tcp 00:31:51.867 rmmod nvme_fabrics 00:31:51.868 rmmod nvme_keyring 00:31:51.868 07:33:53 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:51.868 07:33:53 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:31:51.868 07:33:53 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:31:51.868 07:33:53 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 2675244 ']' 00:31:51.868 07:33:53 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 2675244 00:31:51.868 07:33:53 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 2675244 ']' 00:31:51.868 07:33:53 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 2675244 00:31:51.868 07:33:53 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:31:51.868 07:33:53 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:51.868 07:33:53 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2675244 00:31:51.868 07:33:53 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:51.868 07:33:53 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:51.868 07:33:53 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2675244' 00:31:51.868 killing process with pid 2675244 00:31:51.868 07:33:53 nvmf_dif -- common/autotest_common.sh@971 -- # kill 2675244 00:31:51.868 07:33:53 nvmf_dif -- common/autotest_common.sh@976 -- # wait 2675244 00:31:51.868 07:33:54 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:31:51.868 07:33:54 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:52.126 Waiting for block devices as requested 00:31:52.126 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:52.126 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:52.385 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:52.385 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:52.385 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:52.385 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:52.643 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:52.643 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:52.643 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:31:52.902 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:52.902 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:52.902 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:53.160 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:53.161 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:53.161 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:53.161 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:53.419 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:53.419 07:33:56 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:53.419 07:33:56 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:53.419 07:33:56 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:31:53.419 07:33:56 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:31:53.419 07:33:56 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:53.419 07:33:56 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:31:53.419 07:33:56 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:53.419 07:33:56 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:53.419 07:33:56 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:53.419 07:33:56 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:53.419 07:33:56 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:55.398 07:33:58 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:55.398 
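Teardown at the end of the nvmf_dif suite follows the standard nvmftestfini path seen above: unload the kernel NVMe/TCP initiator modules, restore only the iptables rules that are not SPDK_NVMF entries, drop the target network namespace, and flush the initiator-side interface. Reduced to plain commands, that sequence is roughly the sketch below; the interface and namespace names are the ones appearing in this trace, and the netns deletion is an assumption about what _remove_spdk_ns does since its body is not shown here.

# Sketch of the cleanup performed above:
modprobe -r nvme-tcp nvme-fabrics nvme-keyring        # unload kernel initiator modules
iptables-save | grep -v SPDK_NVMF | iptables-restore  # keep only non-SPDK rules (the iptr helper)
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # _remove_spdk_ns (assumed implementation)
ip -4 addr flush cvl_0_1                               # flush the initiator-side interface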
00:31:55.398 real 1m7.738s 00:31:55.398 user 6m31.208s 00:31:55.398 sys 0m17.793s 00:31:55.398 07:33:58 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:55.398 07:33:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:55.398 ************************************ 00:31:55.398 END TEST nvmf_dif 00:31:55.398 ************************************ 00:31:55.656 07:33:58 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:55.656 07:33:58 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:31:55.656 07:33:58 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:55.656 07:33:58 -- common/autotest_common.sh@10 -- # set +x 00:31:55.656 ************************************ 00:31:55.656 START TEST nvmf_abort_qd_sizes 00:31:55.656 ************************************ 00:31:55.656 07:33:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:55.656 * Looking for test storage... 00:31:55.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:55.656 07:33:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:55.656 07:33:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:31:55.656 07:33:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:55.656 07:33:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:55.656 07:33:59 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:55.656 07:33:59 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:55.656 07:33:59 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:55.656 07:33:59 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:31:55.656 07:33:59 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:31:55.656 07:33:59 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:31:55.656 07:33:59 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:31:55.656 07:33:59 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:31:55.656 07:33:59 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:31:55.656 07:33:59 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:31:55.656 07:33:59 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:55.656 07:33:59 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:31:55.656 07:33:59 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:31:55.656 07:33:59 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:55.656 07:33:59 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:55.656 07:33:59 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:31:55.656 07:33:59 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:31:55.656 07:33:59 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:55.656 07:33:59 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:31:55.656 07:33:59 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:31:55.656 07:33:59 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:31:55.656 07:33:59 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:31:55.656 07:33:59 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:55.656 07:33:59 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:31:55.656 07:33:59 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:31:55.656 07:33:59 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:55.656 07:33:59 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:55.656 07:33:59 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:31:55.656 07:33:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:55.656 07:33:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:55.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.656 --rc genhtml_branch_coverage=1 00:31:55.656 --rc genhtml_function_coverage=1 00:31:55.656 --rc genhtml_legend=1 00:31:55.656 --rc geninfo_all_blocks=1 00:31:55.656 --rc geninfo_unexecuted_blocks=1 00:31:55.656 00:31:55.656 ' 00:31:55.656 07:33:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:55.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.656 --rc genhtml_branch_coverage=1 00:31:55.656 --rc genhtml_function_coverage=1 00:31:55.656 --rc genhtml_legend=1 00:31:55.656 --rc geninfo_all_blocks=1 00:31:55.656 --rc geninfo_unexecuted_blocks=1 00:31:55.656 00:31:55.656 ' 00:31:55.656 07:33:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:55.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.657 --rc genhtml_branch_coverage=1 00:31:55.657 --rc genhtml_function_coverage=1 00:31:55.657 --rc genhtml_legend=1 00:31:55.657 --rc geninfo_all_blocks=1 00:31:55.657 --rc geninfo_unexecuted_blocks=1 00:31:55.657 00:31:55.657 ' 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:55.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.657 --rc genhtml_branch_coverage=1 00:31:55.657 --rc genhtml_function_coverage=1 00:31:55.657 --rc genhtml_legend=1 00:31:55.657 --rc geninfo_all_blocks=1 00:31:55.657 --rc geninfo_unexecuted_blocks=1 00:31:55.657 00:31:55.657 ' 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:55.657 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:31:55.657 07:33:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:58.191 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:58.191 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:58.191 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:58.192 Found net devices under 0000:09:00.0: cvl_0_0 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:58.192 Found net devices under 0000:09:00.1: cvl_0_1 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:58.192 07:34:01 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:58.192 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:58.192 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:31:58.192 00:31:58.192 --- 10.0.0.2 ping statistics --- 00:31:58.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:58.192 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:58.192 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:58.192 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:31:58.192 00:31:58.192 --- 10.0.0.1 ping statistics --- 00:31:58.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:58.192 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:31:58.192 07:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:59.126 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:59.126 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:59.126 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:59.126 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:59.126 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:59.126 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:59.126 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:59.126 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:59.126 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:59.126 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:59.385 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:59.385 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:59.385 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:59.385 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:59.385 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:59.385 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:00.322 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:32:00.322 07:34:03 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:00.322 07:34:03 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:00.322 07:34:03 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:00.322 07:34:03 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:00.322 07:34:03 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:00.322 07:34:03 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:00.322 07:34:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:32:00.322 07:34:03 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:00.322 07:34:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:00.322 07:34:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:00.322 07:34:03 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=2686844 00:32:00.322 07:34:03 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:00.322 07:34:03 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 2686844 00:32:00.322 07:34:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 2686844 ']' 00:32:00.322 07:34:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:00.322 07:34:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:00.322 07:34:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:00.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:00.322 07:34:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:00.322 07:34:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:00.322 [2024-11-20 07:34:03.737441] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:32:00.322 [2024-11-20 07:34:03.737516] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:00.581 [2024-11-20 07:34:03.809454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:00.581 [2024-11-20 07:34:03.868509] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:00.581 [2024-11-20 07:34:03.868560] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:00.581 [2024-11-20 07:34:03.868573] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:00.581 [2024-11-20 07:34:03.868584] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:00.581 [2024-11-20 07:34:03.868593] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:00.581 [2024-11-20 07:34:03.870039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:00.581 [2024-11-20 07:34:03.870096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:00.581 [2024-11-20 07:34:03.870162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:00.581 [2024-11-20 07:34:03.870165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:00.581 07:34:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:00.581 07:34:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:32:00.581 07:34:03 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:00.581 07:34:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:00.581 07:34:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:00.839 07:34:04 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:00.839 07:34:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:00.839 07:34:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:32:00.839 07:34:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:32:00.839 07:34:04 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:32:00.839 07:34:04 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:32:00.839 07:34:04 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:0b:00.0 ]] 00:32:00.839 07:34:04 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:00.839 07:34:04 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:32:00.839 07:34:04 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:0b:00.0 ]] 00:32:00.839 07:34:04 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:32:00.839 
07:34:04 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:32:00.839 07:34:04 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:32:00.839 07:34:04 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:32:00.839 07:34:04 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:0b:00.0 00:32:00.839 07:34:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:32:00.839 07:34:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:0b:00.0 00:32:00.839 07:34:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:32:00.839 07:34:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:32:00.839 07:34:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:00.839 07:34:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:00.839 ************************************ 00:32:00.839 START TEST spdk_target_abort 00:32:00.839 ************************************ 00:32:00.839 07:34:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:32:00.839 07:34:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:00.839 07:34:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:0b:00.0 -b spdk_target 00:32:00.839 07:34:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.839 07:34:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:04.121 spdk_targetn1 00:32:04.121 07:34:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.121 07:34:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:04.121 07:34:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.121 07:34:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:04.121 [2024-11-20 07:34:06.894035] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:04.121 07:34:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.121 07:34:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:32:04.121 07:34:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.121 07:34:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:04.121 07:34:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.121 07:34:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:32:04.121 07:34:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.121 07:34:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:04.121 07:34:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.121 07:34:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:32:04.121 07:34:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.121 07:34:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:04.121 [2024-11-20 07:34:06.938379] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:04.121 07:34:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.121 07:34:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:32:04.121 07:34:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:04.121 07:34:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:04.121 07:34:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:04.121 07:34:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:04.121 07:34:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:04.121 07:34:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:04.121 07:34:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:04.121 07:34:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:04.121 07:34:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:04.121 07:34:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:04.121 07:34:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:04.121 07:34:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:04.121 07:34:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:04.121 07:34:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:04.121 07:34:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:04.121 07:34:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:04.121 07:34:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:04.121 07:34:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:04.121 07:34:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:04.121 07:34:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:07.399 Initializing NVMe Controllers 00:32:07.399 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:07.399 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:07.399 Initialization complete. Launching workers. 00:32:07.399 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12653, failed: 0 00:32:07.399 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1127, failed to submit 11526 00:32:07.399 success 720, unsuccessful 407, failed 0 00:32:07.399 07:34:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:07.399 07:34:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:10.677 Initializing NVMe Controllers 00:32:10.677 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:10.677 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:10.677 Initialization complete. Launching workers. 00:32:10.677 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8830, failed: 0 00:32:10.677 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1212, failed to submit 7618 00:32:10.677 success 340, unsuccessful 872, failed 0 00:32:10.677 07:34:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:10.677 07:34:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:13.955 Initializing NVMe Controllers 00:32:13.955 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:13.955 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:13.955 Initialization complete. Launching workers. 
00:32:13.955 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31163, failed: 0 00:32:13.955 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2609, failed to submit 28554 00:32:13.955 success 486, unsuccessful 2123, failed 0 00:32:13.955 07:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:32:13.955 07:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.955 07:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:13.955 07:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.955 07:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:13.955 07:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.955 07:34:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:14.887 07:34:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.887 07:34:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2686844 00:32:14.887 07:34:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 2686844 ']' 00:32:14.887 07:34:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 2686844 00:32:14.887 07:34:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:32:14.887 07:34:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:14.887 07:34:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2686844 00:32:14.887 07:34:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:14.887 07:34:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:14.887 07:34:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2686844' 00:32:14.887 killing process with pid 2686844 00:32:14.887 07:34:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 2686844 00:32:14.887 07:34:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 2686844 00:32:14.887 00:32:14.887 real 0m14.232s 00:32:14.887 user 0m53.419s 00:32:14.887 sys 0m2.900s 00:32:14.887 07:34:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:14.887 07:34:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:14.887 ************************************ 00:32:14.887 END TEST spdk_target_abort 00:32:14.887 ************************************ 00:32:14.887 07:34:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:32:14.887 07:34:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:32:14.887 07:34:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:14.887 07:34:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:15.146 ************************************ 00:32:15.146 START TEST kernel_target_abort 00:32:15.146 
************************************ 00:32:15.146 07:34:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:32:15.146 07:34:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:32:15.146 07:34:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:32:15.146 07:34:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:15.146 07:34:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:15.146 07:34:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:15.146 07:34:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:15.146 07:34:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:15.146 07:34:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:15.146 07:34:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:15.146 07:34:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:15.146 07:34:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:15.146 07:34:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:15.146 07:34:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:15.146 07:34:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:32:15.146 07:34:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:15.146 07:34:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:15.146 07:34:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:15.146 07:34:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:32:15.146 07:34:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:32:15.146 07:34:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:32:15.146 07:34:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:15.146 07:34:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:16.080 Waiting for block devices as requested 00:32:16.080 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:16.338 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:16.338 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:16.338 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:16.338 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:16.597 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:16.597 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:16.597 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:16.597 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:32:16.855 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:16.855 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:17.113 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:17.113 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:17.113 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:17.113 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:17.371 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:17.371 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:17.371 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:32:17.371 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:17.371 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:32:17.371 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:32:17.371 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:17.371 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:32:17.371 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:32:17.371 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:32:17.371 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:17.630 No valid GPT data, bailing 00:32:17.630 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:17.630 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:32:17.630 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:32:17.630 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:32:17.630 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:32:17.630 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:17.630 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:17.630 07:34:20 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:17.630 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:17.630 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:32:17.630 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:32:17.630 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:32:17.630 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:32:17.630 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:32:17.630 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:32:17.630 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:32:17.630 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:17.630 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:32:17.630 00:32:17.630 Discovery Log Number of Records 2, Generation counter 2 00:32:17.630 =====Discovery Log Entry 0====== 00:32:17.630 trtype: tcp 00:32:17.630 adrfam: ipv4 00:32:17.630 subtype: current discovery subsystem 00:32:17.630 treq: not specified, sq flow control disable supported 00:32:17.630 portid: 1 00:32:17.630 trsvcid: 4420 00:32:17.630 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:17.630 traddr: 10.0.0.1 00:32:17.630 eflags: none 00:32:17.630 sectype: none 00:32:17.630 =====Discovery Log Entry 1====== 00:32:17.630 trtype: tcp 00:32:17.630 adrfam: ipv4 00:32:17.630 subtype: nvme subsystem 00:32:17.630 treq: not specified, sq flow control disable supported 00:32:17.630 portid: 1 00:32:17.630 trsvcid: 4420 00:32:17.630 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:17.630 traddr: 10.0.0.1 00:32:17.630 eflags: none 00:32:17.630 sectype: none 00:32:17.630 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:32:17.630 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:17.630 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:17.630 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:17.630 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:17.630 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:17.630 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:17.630 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:17.630 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:17.630 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:17.630 07:34:20 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:17.630 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:17.630 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:17.631 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:17.631 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:17.631 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:17.631 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:17.631 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:17.631 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:17.631 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:17.631 07:34:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:20.915 Initializing NVMe Controllers 00:32:20.915 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:20.915 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:20.915 Initialization complete. Launching workers. 00:32:20.915 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 48134, failed: 0 00:32:20.915 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 48134, failed to submit 0 00:32:20.915 success 0, unsuccessful 48134, failed 0 00:32:20.915 07:34:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:20.915 07:34:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:24.199 Initializing NVMe Controllers 00:32:24.199 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:24.199 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:24.199 Initialization complete. Launching workers. 
00:32:24.199 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 95065, failed: 0 00:32:24.199 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21394, failed to submit 73671 00:32:24.199 success 0, unsuccessful 21394, failed 0 00:32:24.199 07:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:24.199 07:34:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:27.483 Initializing NVMe Controllers 00:32:27.483 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:27.483 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:27.483 Initialization complete. Launching workers. 00:32:27.483 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 87950, failed: 0 00:32:27.483 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21954, failed to submit 65996 00:32:27.483 success 0, unsuccessful 21954, failed 0 00:32:27.483 07:34:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:32:27.483 07:34:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:27.483 07:34:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:32:27.483 07:34:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:27.483 07:34:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:27.483 07:34:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:27.483 07:34:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:27.483 07:34:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:32:27.483 07:34:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:32:27.483 07:34:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:28.421 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:28.421 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:28.421 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:28.421 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:28.421 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:28.421 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:28.421 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:28.421 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:28.421 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:28.421 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:28.421 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:28.421 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:28.421 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:28.421 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:28.421 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:32:28.679 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:29.614 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:32:29.614 00:32:29.614 real 0m14.562s 00:32:29.614 user 0m6.242s 00:32:29.614 sys 0m3.576s 00:32:29.614 07:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:29.614 07:34:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:29.614 ************************************ 00:32:29.614 END TEST kernel_target_abort 00:32:29.614 ************************************ 00:32:29.614 07:34:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:29.614 07:34:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:32:29.614 07:34:32 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:29.614 07:34:32 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:32:29.614 07:34:32 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:29.614 07:34:32 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:32:29.614 07:34:32 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:29.614 07:34:32 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:29.614 rmmod nvme_tcp 00:32:29.614 rmmod nvme_fabrics 00:32:29.614 rmmod nvme_keyring 00:32:29.614 07:34:32 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:29.614 07:34:32 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:32:29.614 07:34:32 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:32:29.614 07:34:32 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 2686844 ']' 00:32:29.614 07:34:32 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 2686844 00:32:29.614 07:34:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 2686844 ']' 00:32:29.614 07:34:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 2686844 00:32:29.614 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2686844) - No such process 00:32:29.614 07:34:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 2686844 is not found' 00:32:29.614 Process with pid 2686844 is not found 00:32:29.614 07:34:32 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:32:29.614 07:34:32 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:30.992 Waiting for block devices as requested 00:32:30.992 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:30.992 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:30.992 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:31.252 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:31.252 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:31.252 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:31.252 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:31.512 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:31.512 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:32:31.512 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:31.771 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:31.771 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:31.771 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:32.030 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:32.030 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:32.030 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:32.030 0000:80:04.0 
(8086 0e20): vfio-pci -> ioatdma 00:32:32.289 07:34:35 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:32.289 07:34:35 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:32.289 07:34:35 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:32:32.289 07:34:35 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:32:32.289 07:34:35 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:32.289 07:34:35 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:32:32.289 07:34:35 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:32.289 07:34:35 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:32.289 07:34:35 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:32.289 07:34:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:32.289 07:34:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:34.193 07:34:37 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:34.193 00:32:34.193 real 0m38.713s 00:32:34.193 user 1m2.004s 00:32:34.193 sys 0m10.111s 00:32:34.193 07:34:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:34.193 07:34:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:34.193 ************************************ 00:32:34.193 END TEST nvmf_abort_qd_sizes 00:32:34.193 ************************************ 00:32:34.193 07:34:37 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:34.193 07:34:37 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:32:34.193 07:34:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:34.193 07:34:37 -- common/autotest_common.sh@10 -- # set +x 00:32:34.453 ************************************ 00:32:34.453 START TEST keyring_file 00:32:34.453 ************************************ 00:32:34.453 07:34:37 keyring_file -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:34.453 * Looking for test storage... 
00:32:34.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:34.453 07:34:37 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:34.453 07:34:37 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:32:34.453 07:34:37 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:34.453 07:34:37 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:34.453 07:34:37 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:34.453 07:34:37 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:34.453 07:34:37 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:34.453 07:34:37 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:32:34.453 07:34:37 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:32:34.453 07:34:37 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:32:34.453 07:34:37 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:32:34.453 07:34:37 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:32:34.453 07:34:37 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:32:34.453 07:34:37 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:32:34.453 07:34:37 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:34.453 07:34:37 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:32:34.453 07:34:37 keyring_file -- scripts/common.sh@345 -- # : 1 00:32:34.453 07:34:37 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:34.453 07:34:37 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:34.453 07:34:37 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:32:34.453 07:34:37 keyring_file -- scripts/common.sh@353 -- # local d=1 00:32:34.453 07:34:37 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:34.453 07:34:37 keyring_file -- scripts/common.sh@355 -- # echo 1 00:32:34.453 07:34:37 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:32:34.453 07:34:37 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:32:34.453 07:34:37 keyring_file -- scripts/common.sh@353 -- # local d=2 00:32:34.453 07:34:37 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:34.453 07:34:37 keyring_file -- scripts/common.sh@355 -- # echo 2 00:32:34.453 07:34:37 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:32:34.453 07:34:37 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:34.453 07:34:37 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:34.453 07:34:37 keyring_file -- scripts/common.sh@368 -- # return 0 00:32:34.453 07:34:37 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:34.453 07:34:37 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:34.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:34.453 --rc genhtml_branch_coverage=1 00:32:34.453 --rc genhtml_function_coverage=1 00:32:34.453 --rc genhtml_legend=1 00:32:34.453 --rc geninfo_all_blocks=1 00:32:34.453 --rc geninfo_unexecuted_blocks=1 00:32:34.453 00:32:34.453 ' 00:32:34.453 07:34:37 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:34.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:34.453 --rc genhtml_branch_coverage=1 00:32:34.453 --rc genhtml_function_coverage=1 00:32:34.453 --rc genhtml_legend=1 00:32:34.453 --rc geninfo_all_blocks=1 
00:32:34.453 --rc geninfo_unexecuted_blocks=1 00:32:34.453 00:32:34.453 ' 00:32:34.453 07:34:37 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:34.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:34.453 --rc genhtml_branch_coverage=1 00:32:34.453 --rc genhtml_function_coverage=1 00:32:34.453 --rc genhtml_legend=1 00:32:34.453 --rc geninfo_all_blocks=1 00:32:34.453 --rc geninfo_unexecuted_blocks=1 00:32:34.453 00:32:34.453 ' 00:32:34.453 07:34:37 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:34.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:34.453 --rc genhtml_branch_coverage=1 00:32:34.453 --rc genhtml_function_coverage=1 00:32:34.453 --rc genhtml_legend=1 00:32:34.453 --rc geninfo_all_blocks=1 00:32:34.453 --rc geninfo_unexecuted_blocks=1 00:32:34.453 00:32:34.453 ' 00:32:34.453 07:34:37 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:34.453 07:34:37 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:34.453 07:34:37 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:32:34.453 07:34:37 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:34.453 07:34:37 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:34.453 07:34:37 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:34.453 07:34:37 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:34.453 07:34:37 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:34.453 07:34:37 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:34.453 07:34:37 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:34.453 07:34:37 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:34.453 07:34:37 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:34.453 07:34:37 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:34.453 07:34:37 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:34.453 07:34:37 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:34.453 07:34:37 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:34.453 07:34:37 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:34.453 07:34:37 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:34.453 07:34:37 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:34.453 07:34:37 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:34.453 07:34:37 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:32:34.453 07:34:37 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:34.453 07:34:37 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:34.453 07:34:37 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:34.453 07:34:37 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.454 07:34:37 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.454 07:34:37 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.454 07:34:37 keyring_file -- paths/export.sh@5 -- # export PATH 00:32:34.454 07:34:37 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.454 07:34:37 keyring_file -- nvmf/common.sh@51 -- # : 0 00:32:34.454 07:34:37 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:34.454 07:34:37 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:34.454 07:34:37 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:34.454 07:34:37 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:34.454 07:34:37 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:34.454 07:34:37 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:34.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:34.454 07:34:37 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:34.454 07:34:37 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:34.454 07:34:37 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:34.454 07:34:37 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:34.454 07:34:37 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:34.454 07:34:37 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:34.454 07:34:37 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:32:34.454 07:34:37 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:32:34.454 07:34:37 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:32:34.454 07:34:37 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:34.454 07:34:37 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
00:32:34.454 07:34:37 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:34.454 07:34:37 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:34.454 07:34:37 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:34.454 07:34:37 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:34.454 07:34:37 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.1mw5I7yxlB 00:32:34.454 07:34:37 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:34.454 07:34:37 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:34.454 07:34:37 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:32:34.454 07:34:37 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:32:34.454 07:34:37 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:32:34.454 07:34:37 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:32:34.454 07:34:37 keyring_file -- nvmf/common.sh@733 -- # python - 00:32:34.454 07:34:37 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.1mw5I7yxlB 00:32:34.454 07:34:37 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.1mw5I7yxlB 00:32:34.454 07:34:37 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.1mw5I7yxlB 00:32:34.454 07:34:37 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:32:34.454 07:34:37 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:34.454 07:34:37 keyring_file -- keyring/common.sh@17 -- # name=key1 00:32:34.454 07:34:37 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:34.454 07:34:37 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:34.454 07:34:37 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:34.454 07:34:37 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.q9ZvTZrlhE 00:32:34.454 07:34:37 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:34.454 07:34:37 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:34.454 07:34:37 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:32:34.454 07:34:37 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:32:34.454 07:34:37 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:32:34.454 07:34:37 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:32:34.454 07:34:37 keyring_file -- nvmf/common.sh@733 -- # python - 00:32:34.454 07:34:37 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.q9ZvTZrlhE 00:32:34.454 07:34:37 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.q9ZvTZrlhE 00:32:34.454 07:34:37 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.q9ZvTZrlhE 00:32:34.454 07:34:37 keyring_file -- keyring/file.sh@30 -- # tgtpid=2692623 00:32:34.454 07:34:37 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:34.454 07:34:37 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2692623 00:32:34.454 07:34:37 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 2692623 ']' 00:32:34.454 07:34:37 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:34.454 07:34:37 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:34.454 07:34:37 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:34.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:34.454 07:34:37 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:34.454 07:34:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:34.712 [2024-11-20 07:34:37.928667] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:32:34.712 [2024-11-20 07:34:37.928757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2692623 ] 00:32:34.712 [2024-11-20 07:34:37.991962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:34.712 [2024-11-20 07:34:38.048780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:34.971 07:34:38 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:34.971 07:34:38 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:32:34.971 07:34:38 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:32:34.971 07:34:38 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.971 07:34:38 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:34.971 [2024-11-20 07:34:38.300034] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:34.971 null0 00:32:34.971 [2024-11-20 07:34:38.332091] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:34.971 [2024-11-20 07:34:38.332561] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:34.971 07:34:38 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.971 07:34:38 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:34.971 07:34:38 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:32:34.971 07:34:38 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:34.971 07:34:38 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:34.971 07:34:38 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:34.971 07:34:38 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:34.971 07:34:38 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:34.971 07:34:38 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:34.971 07:34:38 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.971 07:34:38 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:34.971 [2024-11-20 07:34:38.356134] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:32:34.971 request: 00:32:34.971 { 00:32:34.972 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:32:34.972 "secure_channel": false, 00:32:34.972 "listen_address": { 00:32:34.972 "trtype": "tcp", 00:32:34.972 "traddr": "127.0.0.1", 00:32:34.972 "trsvcid": "4420" 00:32:34.972 }, 00:32:34.972 "method": "nvmf_subsystem_add_listener", 00:32:34.972 "req_id": 1 00:32:34.972 } 00:32:34.972 Got JSON-RPC error response 00:32:34.972 response: 00:32:34.972 { 00:32:34.972 
"code": -32602, 00:32:34.972 "message": "Invalid parameters" 00:32:34.972 } 00:32:34.972 07:34:38 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:34.972 07:34:38 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:32:34.972 07:34:38 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:34.972 07:34:38 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:34.972 07:34:38 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:34.972 07:34:38 keyring_file -- keyring/file.sh@47 -- # bperfpid=2692637 00:32:34.972 07:34:38 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:32:34.972 07:34:38 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2692637 /var/tmp/bperf.sock 00:32:34.972 07:34:38 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 2692637 ']' 00:32:34.972 07:34:38 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:34.972 07:34:38 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:34.972 07:34:38 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:34.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:34.972 07:34:38 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:34.972 07:34:38 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:35.230 [2024-11-20 07:34:38.404970] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:32:35.230 [2024-11-20 07:34:38.405030] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2692637 ] 00:32:35.230 [2024-11-20 07:34:38.469151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:35.230 [2024-11-20 07:34:38.529643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:35.230 07:34:38 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:35.230 07:34:38 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:32:35.230 07:34:38 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1mw5I7yxlB 00:32:35.230 07:34:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1mw5I7yxlB 00:32:35.488 07:34:38 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.q9ZvTZrlhE 00:32:35.488 07:34:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.q9ZvTZrlhE 00:32:35.801 07:34:39 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:32:35.801 07:34:39 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:32:35.801 07:34:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:35.801 07:34:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:35.801 07:34:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:32:36.105 07:34:39 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.1mw5I7yxlB == \/\t\m\p\/\t\m\p\.\1\m\w\5\I\7\y\x\l\B ]] 00:32:36.105 07:34:39 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:32:36.105 07:34:39 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:32:36.105 07:34:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:36.105 07:34:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:36.105 07:34:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:36.364 07:34:39 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.q9ZvTZrlhE == \/\t\m\p\/\t\m\p\.\q\9\Z\v\T\Z\r\l\h\E ]] 00:32:36.364 07:34:39 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:32:36.364 07:34:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:36.364 07:34:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:36.364 07:34:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:36.364 07:34:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:36.364 07:34:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:36.622 07:34:40 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:32:36.622 07:34:40 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:32:36.622 07:34:40 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:36.622 07:34:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:36.622 07:34:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:36.622 07:34:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:36.622 07:34:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:36.880 07:34:40 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:32:36.880 07:34:40 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:36.880 07:34:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:37.138 [2024-11-20 07:34:40.530672] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:37.396 nvme0n1 00:32:37.396 07:34:40 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:32:37.396 07:34:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:37.396 07:34:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:37.396 07:34:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:37.396 07:34:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:37.396 07:34:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:37.653 07:34:40 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:32:37.653 07:34:40 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:32:37.653 07:34:40 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:32:37.653 07:34:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:37.653 07:34:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:37.653 07:34:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:37.653 07:34:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:37.911 07:34:41 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:32:37.911 07:34:41 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:37.911 Running I/O for 1 seconds... 00:32:39.105 10241.00 IOPS, 40.00 MiB/s 00:32:39.105 Latency(us) 00:32:39.105 [2024-11-20T06:34:42.538Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:39.105 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:32:39.105 nvme0n1 : 1.01 10282.19 40.16 0.00 0.00 12401.05 5995.33 25049.32 00:32:39.105 [2024-11-20T06:34:42.538Z] =================================================================================================================== 00:32:39.105 [2024-11-20T06:34:42.538Z] Total : 10282.19 40.16 0.00 0.00 12401.05 5995.33 25049.32 00:32:39.105 { 00:32:39.105 "results": [ 00:32:39.105 { 00:32:39.105 "job": "nvme0n1", 00:32:39.105 "core_mask": "0x2", 00:32:39.105 "workload": "randrw", 00:32:39.105 "percentage": 50, 00:32:39.105 "status": "finished", 00:32:39.105 "queue_depth": 128, 00:32:39.105 "io_size": 4096, 00:32:39.105 "runtime": 1.008637, 00:32:39.105 "iops": 10282.19270163597, 00:32:39.105 "mibps": 40.164815240765506, 00:32:39.105 "io_failed": 0, 00:32:39.105 "io_timeout": 0, 00:32:39.105 "avg_latency_us": 12401.052683515643, 00:32:39.105 "min_latency_us": 5995.3303703703705, 00:32:39.105 "max_latency_us": 25049.315555555557 00:32:39.105 } 00:32:39.105 ], 00:32:39.105 "core_count": 1 00:32:39.105 } 00:32:39.105 07:34:42 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:39.105 07:34:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:39.364 07:34:42 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:32:39.364 07:34:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:39.364 07:34:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:39.364 07:34:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:39.364 07:34:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:39.364 07:34:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:39.622 07:34:42 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:32:39.622 07:34:42 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:32:39.622 07:34:42 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:39.622 07:34:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:39.622 07:34:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:39.622 07:34:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:39.622 07:34:42 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:39.880 07:34:43 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:32:39.880 07:34:43 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:39.880 07:34:43 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:32:39.880 07:34:43 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:39.880 07:34:43 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:32:39.880 07:34:43 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:39.880 07:34:43 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:32:39.880 07:34:43 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:39.880 07:34:43 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:39.880 07:34:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:40.138 [2024-11-20 07:34:43.370497] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:40.138 [2024-11-20 07:34:43.370647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe424d0 (107): Transport endpoint is not connected 00:32:40.138 [2024-11-20 07:34:43.371640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe424d0 (9): Bad file descriptor 00:32:40.138 [2024-11-20 07:34:43.372640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:32:40.138 [2024-11-20 07:34:43.372674] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:40.138 [2024-11-20 07:34:43.372686] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:32:40.138 [2024-11-20 07:34:43.372701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
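The errors above belong to the negative attach case: the controller that was attached with key0 has been detached, and the test now asserts (via the NOT wrapper) that re-attaching with --psk key1 fails. A sketch of the equivalent manual invocation, assuming the same $rpc shorthand as in the earlier sketch (addresses and NQNs are copied from the trace):

rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 \
    || echo "attach with key1 failed as expected"   # the trace shows JSON-RPC error -5 (Input/output error)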
00:32:40.138 request: 00:32:40.138 { 00:32:40.138 "name": "nvme0", 00:32:40.138 "trtype": "tcp", 00:32:40.138 "traddr": "127.0.0.1", 00:32:40.138 "adrfam": "ipv4", 00:32:40.138 "trsvcid": "4420", 00:32:40.138 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:40.138 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:40.138 "prchk_reftag": false, 00:32:40.138 "prchk_guard": false, 00:32:40.138 "hdgst": false, 00:32:40.138 "ddgst": false, 00:32:40.138 "psk": "key1", 00:32:40.138 "allow_unrecognized_csi": false, 00:32:40.138 "method": "bdev_nvme_attach_controller", 00:32:40.138 "req_id": 1 00:32:40.138 } 00:32:40.138 Got JSON-RPC error response 00:32:40.138 response: 00:32:40.138 { 00:32:40.138 "code": -5, 00:32:40.138 "message": "Input/output error" 00:32:40.138 } 00:32:40.138 07:34:43 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:32:40.138 07:34:43 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:40.138 07:34:43 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:40.138 07:34:43 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:40.138 07:34:43 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:32:40.139 07:34:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:40.139 07:34:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:40.139 07:34:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:40.139 07:34:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:40.139 07:34:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:40.397 07:34:43 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:40.397 07:34:43 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:32:40.397 07:34:43 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:40.397 07:34:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:40.397 07:34:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:40.397 07:34:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:40.397 07:34:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:40.655 07:34:43 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:32:40.655 07:34:43 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:32:40.655 07:34:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:40.913 07:34:44 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:32:40.913 07:34:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:41.170 07:34:44 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:32:41.170 07:34:44 keyring_file -- keyring/file.sh@78 -- # jq length 00:32:41.170 07:34:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:41.429 07:34:44 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:32:41.429 07:34:44 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.1mw5I7yxlB 00:32:41.429 07:34:44 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.1mw5I7yxlB 00:32:41.429 07:34:44 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:32:41.429 07:34:44 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.1mw5I7yxlB 00:32:41.429 07:34:44 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:32:41.429 07:34:44 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:41.429 07:34:44 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:32:41.429 07:34:44 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:41.429 07:34:44 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1mw5I7yxlB 00:32:41.429 07:34:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1mw5I7yxlB 00:32:41.686 [2024-11-20 07:34:45.004666] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.1mw5I7yxlB': 0100660 00:32:41.686 [2024-11-20 07:34:45.004699] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:41.686 request: 00:32:41.686 { 00:32:41.686 "name": "key0", 00:32:41.686 "path": "/tmp/tmp.1mw5I7yxlB", 00:32:41.686 "method": "keyring_file_add_key", 00:32:41.686 "req_id": 1 00:32:41.686 } 00:32:41.686 Got JSON-RPC error response 00:32:41.686 response: 00:32:41.686 { 00:32:41.686 "code": -1, 00:32:41.686 "message": "Operation not permitted" 00:32:41.686 } 00:32:41.686 07:34:45 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:32:41.686 07:34:45 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:41.686 07:34:45 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:41.686 07:34:45 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:41.686 07:34:45 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.1mw5I7yxlB 00:32:41.686 07:34:45 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1mw5I7yxlB 00:32:41.686 07:34:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1mw5I7yxlB 00:32:41.943 07:34:45 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.1mw5I7yxlB 00:32:41.943 07:34:45 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:32:41.943 07:34:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:41.943 07:34:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:41.943 07:34:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:41.943 07:34:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:41.943 07:34:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:42.201 07:34:45 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:32:42.201 07:34:45 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:42.201 07:34:45 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:32:42.201 07:34:45 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
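The exchange above is the key-file permission check: with the file chmod'ed to 0660 the keyring rejects it ("Invalid permissions for key file", JSON-RPC error -1, Operation not permitted), and only after restoring 0600 does keyring_file_add_key succeed. Sketched as standalone commands (the $rpc shorthand is an assumption; the key name and path come from the trace):

rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
chmod 0660 /tmp/tmp.1mw5I7yxlB
$rpc keyring_file_add_key key0 /tmp/tmp.1mw5I7yxlB    # rejected: group/world-accessible key files are refused
chmod 0600 /tmp/tmp.1mw5I7yxlB
$rpc keyring_file_add_key key0 /tmp/tmp.1mw5I7yxlB    # accepted once the mode is back to 0600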
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:42.201 07:34:45 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:32:42.201 07:34:45 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:42.201 07:34:45 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:32:42.201 07:34:45 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:42.201 07:34:45 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:42.201 07:34:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:42.458 [2024-11-20 07:34:45.834922] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.1mw5I7yxlB': No such file or directory 00:32:42.458 [2024-11-20 07:34:45.834951] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:32:42.458 [2024-11-20 07:34:45.834988] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:32:42.458 [2024-11-20 07:34:45.835001] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:32:42.458 [2024-11-20 07:34:45.835021] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:42.458 [2024-11-20 07:34:45.835032] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:32:42.458 request: 00:32:42.458 { 00:32:42.458 "name": "nvme0", 00:32:42.458 "trtype": "tcp", 00:32:42.458 "traddr": "127.0.0.1", 00:32:42.458 "adrfam": "ipv4", 00:32:42.458 "trsvcid": "4420", 00:32:42.458 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:42.458 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:42.458 "prchk_reftag": false, 00:32:42.458 "prchk_guard": false, 00:32:42.458 "hdgst": false, 00:32:42.458 "ddgst": false, 00:32:42.458 "psk": "key0", 00:32:42.458 "allow_unrecognized_csi": false, 00:32:42.458 "method": "bdev_nvme_attach_controller", 00:32:42.458 "req_id": 1 00:32:42.459 } 00:32:42.459 Got JSON-RPC error response 00:32:42.459 response: 00:32:42.459 { 00:32:42.459 "code": -19, 00:32:42.459 "message": "No such device" 00:32:42.459 } 00:32:42.459 07:34:45 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:32:42.459 07:34:45 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:42.459 07:34:45 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:42.459 07:34:45 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:42.459 07:34:45 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:32:42.459 07:34:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:42.716 07:34:46 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:42.716 07:34:46 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:32:42.716 07:34:46 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:42.716 07:34:46 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:42.716 07:34:46 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:42.716 07:34:46 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:42.716 07:34:46 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.CTCBxI3fzx 00:32:42.716 07:34:46 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:42.716 07:34:46 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:42.716 07:34:46 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:32:42.716 07:34:46 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:32:42.716 07:34:46 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:32:42.716 07:34:46 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:32:42.716 07:34:46 keyring_file -- nvmf/common.sh@733 -- # python - 00:32:42.975 07:34:46 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.CTCBxI3fzx 00:32:42.975 07:34:46 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.CTCBxI3fzx 00:32:42.975 07:34:46 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.CTCBxI3fzx 00:32:42.975 07:34:46 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.CTCBxI3fzx 00:32:42.975 07:34:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.CTCBxI3fzx 00:32:43.233 07:34:46 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:43.233 07:34:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:43.491 nvme0n1 00:32:43.491 07:34:46 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:32:43.491 07:34:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:43.491 07:34:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:43.491 07:34:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:43.491 07:34:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:43.491 07:34:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:43.748 07:34:47 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:32:43.748 07:34:47 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:32:43.748 07:34:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:44.006 07:34:47 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:32:44.006 07:34:47 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:32:44.006 07:34:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:44.006 07:34:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:32:44.006 07:34:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:44.264 07:34:47 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:32:44.264 07:34:47 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:32:44.264 07:34:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:44.264 07:34:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:44.264 07:34:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:44.264 07:34:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:44.264 07:34:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:44.521 07:34:47 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:32:44.521 07:34:47 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:44.521 07:34:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:44.779 07:34:48 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:32:44.779 07:34:48 keyring_file -- keyring/file.sh@105 -- # jq length 00:32:44.779 07:34:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:45.344 07:34:48 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:32:45.344 07:34:48 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.CTCBxI3fzx 00:32:45.345 07:34:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.CTCBxI3fzx 00:32:45.345 07:34:48 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.q9ZvTZrlhE 00:32:45.345 07:34:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.q9ZvTZrlhE 00:32:45.602 07:34:49 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:45.602 07:34:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:46.167 nvme0n1 00:32:46.167 07:34:49 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:32:46.167 07:34:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:32:46.426 07:34:49 keyring_file -- keyring/file.sh@113 -- # config='{ 00:32:46.426 "subsystems": [ 00:32:46.426 { 00:32:46.426 "subsystem": "keyring", 00:32:46.426 "config": [ 00:32:46.426 { 00:32:46.426 "method": "keyring_file_add_key", 00:32:46.426 "params": { 00:32:46.426 "name": "key0", 00:32:46.426 "path": "/tmp/tmp.CTCBxI3fzx" 00:32:46.426 } 00:32:46.426 }, 00:32:46.426 { 00:32:46.426 "method": "keyring_file_add_key", 00:32:46.426 "params": { 00:32:46.426 "name": "key1", 00:32:46.426 "path": "/tmp/tmp.q9ZvTZrlhE" 00:32:46.426 } 00:32:46.426 } 00:32:46.426 ] 
00:32:46.426 }, 00:32:46.426 { 00:32:46.426 "subsystem": "iobuf", 00:32:46.426 "config": [ 00:32:46.426 { 00:32:46.426 "method": "iobuf_set_options", 00:32:46.426 "params": { 00:32:46.426 "small_pool_count": 8192, 00:32:46.426 "large_pool_count": 1024, 00:32:46.426 "small_bufsize": 8192, 00:32:46.426 "large_bufsize": 135168, 00:32:46.426 "enable_numa": false 00:32:46.426 } 00:32:46.426 } 00:32:46.426 ] 00:32:46.426 }, 00:32:46.426 { 00:32:46.426 "subsystem": "sock", 00:32:46.426 "config": [ 00:32:46.426 { 00:32:46.426 "method": "sock_set_default_impl", 00:32:46.426 "params": { 00:32:46.426 "impl_name": "posix" 00:32:46.426 } 00:32:46.426 }, 00:32:46.426 { 00:32:46.426 "method": "sock_impl_set_options", 00:32:46.426 "params": { 00:32:46.426 "impl_name": "ssl", 00:32:46.426 "recv_buf_size": 4096, 00:32:46.426 "send_buf_size": 4096, 00:32:46.426 "enable_recv_pipe": true, 00:32:46.426 "enable_quickack": false, 00:32:46.426 "enable_placement_id": 0, 00:32:46.426 "enable_zerocopy_send_server": true, 00:32:46.426 "enable_zerocopy_send_client": false, 00:32:46.426 "zerocopy_threshold": 0, 00:32:46.426 "tls_version": 0, 00:32:46.426 "enable_ktls": false 00:32:46.426 } 00:32:46.426 }, 00:32:46.426 { 00:32:46.426 "method": "sock_impl_set_options", 00:32:46.426 "params": { 00:32:46.426 "impl_name": "posix", 00:32:46.426 "recv_buf_size": 2097152, 00:32:46.426 "send_buf_size": 2097152, 00:32:46.426 "enable_recv_pipe": true, 00:32:46.426 "enable_quickack": false, 00:32:46.426 "enable_placement_id": 0, 00:32:46.426 "enable_zerocopy_send_server": true, 00:32:46.426 "enable_zerocopy_send_client": false, 00:32:46.426 "zerocopy_threshold": 0, 00:32:46.426 "tls_version": 0, 00:32:46.426 "enable_ktls": false 00:32:46.426 } 00:32:46.426 } 00:32:46.426 ] 00:32:46.426 }, 00:32:46.426 { 00:32:46.426 "subsystem": "vmd", 00:32:46.426 "config": [] 00:32:46.426 }, 00:32:46.426 { 00:32:46.426 "subsystem": "accel", 00:32:46.426 "config": [ 00:32:46.426 { 00:32:46.426 "method": "accel_set_options", 00:32:46.426 "params": { 00:32:46.426 "small_cache_size": 128, 00:32:46.426 "large_cache_size": 16, 00:32:46.426 "task_count": 2048, 00:32:46.426 "sequence_count": 2048, 00:32:46.426 "buf_count": 2048 00:32:46.426 } 00:32:46.426 } 00:32:46.426 ] 00:32:46.426 }, 00:32:46.426 { 00:32:46.426 "subsystem": "bdev", 00:32:46.426 "config": [ 00:32:46.426 { 00:32:46.426 "method": "bdev_set_options", 00:32:46.426 "params": { 00:32:46.426 "bdev_io_pool_size": 65535, 00:32:46.426 "bdev_io_cache_size": 256, 00:32:46.426 "bdev_auto_examine": true, 00:32:46.426 "iobuf_small_cache_size": 128, 00:32:46.426 "iobuf_large_cache_size": 16 00:32:46.426 } 00:32:46.426 }, 00:32:46.426 { 00:32:46.426 "method": "bdev_raid_set_options", 00:32:46.426 "params": { 00:32:46.426 "process_window_size_kb": 1024, 00:32:46.426 "process_max_bandwidth_mb_sec": 0 00:32:46.426 } 00:32:46.426 }, 00:32:46.426 { 00:32:46.426 "method": "bdev_iscsi_set_options", 00:32:46.426 "params": { 00:32:46.426 "timeout_sec": 30 00:32:46.426 } 00:32:46.426 }, 00:32:46.426 { 00:32:46.426 "method": "bdev_nvme_set_options", 00:32:46.426 "params": { 00:32:46.426 "action_on_timeout": "none", 00:32:46.426 "timeout_us": 0, 00:32:46.426 "timeout_admin_us": 0, 00:32:46.426 "keep_alive_timeout_ms": 10000, 00:32:46.426 "arbitration_burst": 0, 00:32:46.426 "low_priority_weight": 0, 00:32:46.426 "medium_priority_weight": 0, 00:32:46.426 "high_priority_weight": 0, 00:32:46.426 "nvme_adminq_poll_period_us": 10000, 00:32:46.426 "nvme_ioq_poll_period_us": 0, 00:32:46.426 "io_queue_requests": 512, 
00:32:46.426 "delay_cmd_submit": true, 00:32:46.426 "transport_retry_count": 4, 00:32:46.426 "bdev_retry_count": 3, 00:32:46.426 "transport_ack_timeout": 0, 00:32:46.426 "ctrlr_loss_timeout_sec": 0, 00:32:46.426 "reconnect_delay_sec": 0, 00:32:46.426 "fast_io_fail_timeout_sec": 0, 00:32:46.426 "disable_auto_failback": false, 00:32:46.426 "generate_uuids": false, 00:32:46.426 "transport_tos": 0, 00:32:46.426 "nvme_error_stat": false, 00:32:46.426 "rdma_srq_size": 0, 00:32:46.426 "io_path_stat": false, 00:32:46.426 "allow_accel_sequence": false, 00:32:46.426 "rdma_max_cq_size": 0, 00:32:46.426 "rdma_cm_event_timeout_ms": 0, 00:32:46.427 "dhchap_digests": [ 00:32:46.427 "sha256", 00:32:46.427 "sha384", 00:32:46.427 "sha512" 00:32:46.427 ], 00:32:46.427 "dhchap_dhgroups": [ 00:32:46.427 "null", 00:32:46.427 "ffdhe2048", 00:32:46.427 "ffdhe3072", 00:32:46.427 "ffdhe4096", 00:32:46.427 "ffdhe6144", 00:32:46.427 "ffdhe8192" 00:32:46.427 ] 00:32:46.427 } 00:32:46.427 }, 00:32:46.427 { 00:32:46.427 "method": "bdev_nvme_attach_controller", 00:32:46.427 "params": { 00:32:46.427 "name": "nvme0", 00:32:46.427 "trtype": "TCP", 00:32:46.427 "adrfam": "IPv4", 00:32:46.427 "traddr": "127.0.0.1", 00:32:46.427 "trsvcid": "4420", 00:32:46.427 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:46.427 "prchk_reftag": false, 00:32:46.427 "prchk_guard": false, 00:32:46.427 "ctrlr_loss_timeout_sec": 0, 00:32:46.427 "reconnect_delay_sec": 0, 00:32:46.427 "fast_io_fail_timeout_sec": 0, 00:32:46.427 "psk": "key0", 00:32:46.427 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:46.427 "hdgst": false, 00:32:46.427 "ddgst": false, 00:32:46.427 "multipath": "multipath" 00:32:46.427 } 00:32:46.427 }, 00:32:46.427 { 00:32:46.427 "method": "bdev_nvme_set_hotplug", 00:32:46.427 "params": { 00:32:46.427 "period_us": 100000, 00:32:46.427 "enable": false 00:32:46.427 } 00:32:46.427 }, 00:32:46.427 { 00:32:46.427 "method": "bdev_wait_for_examine" 00:32:46.427 } 00:32:46.427 ] 00:32:46.427 }, 00:32:46.427 { 00:32:46.427 "subsystem": "nbd", 00:32:46.427 "config": [] 00:32:46.427 } 00:32:46.427 ] 00:32:46.427 }' 00:32:46.427 07:34:49 keyring_file -- keyring/file.sh@115 -- # killprocess 2692637 00:32:46.427 07:34:49 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 2692637 ']' 00:32:46.427 07:34:49 keyring_file -- common/autotest_common.sh@956 -- # kill -0 2692637 00:32:46.427 07:34:49 keyring_file -- common/autotest_common.sh@957 -- # uname 00:32:46.427 07:34:49 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:46.427 07:34:49 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2692637 00:32:46.427 07:34:49 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:46.427 07:34:49 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:46.427 07:34:49 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2692637' 00:32:46.427 killing process with pid 2692637 00:32:46.427 07:34:49 keyring_file -- common/autotest_common.sh@971 -- # kill 2692637 00:32:46.427 Received shutdown signal, test time was about 1.000000 seconds 00:32:46.427 00:32:46.427 Latency(us) 00:32:46.427 [2024-11-20T06:34:49.860Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:46.427 [2024-11-20T06:34:49.860Z] =================================================================================================================== 00:32:46.427 [2024-11-20T06:34:49.860Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:32:46.427 07:34:49 keyring_file -- common/autotest_common.sh@976 -- # wait 2692637 00:32:46.685 07:34:49 keyring_file -- keyring/file.sh@118 -- # bperfpid=2694115 00:32:46.686 07:34:49 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2694115 /var/tmp/bperf.sock 00:32:46.686 07:34:49 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 2694115 ']' 00:32:46.686 07:34:49 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:46.686 07:34:49 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:46.686 07:34:49 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:32:46.686 07:34:49 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:46.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:46.686 07:34:49 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:32:46.686 "subsystems": [ 00:32:46.686 { 00:32:46.686 "subsystem": "keyring", 00:32:46.686 "config": [ 00:32:46.686 { 00:32:46.686 "method": "keyring_file_add_key", 00:32:46.686 "params": { 00:32:46.686 "name": "key0", 00:32:46.686 "path": "/tmp/tmp.CTCBxI3fzx" 00:32:46.686 } 00:32:46.686 }, 00:32:46.686 { 00:32:46.686 "method": "keyring_file_add_key", 00:32:46.686 "params": { 00:32:46.686 "name": "key1", 00:32:46.686 "path": "/tmp/tmp.q9ZvTZrlhE" 00:32:46.686 } 00:32:46.686 } 00:32:46.686 ] 00:32:46.686 }, 00:32:46.686 { 00:32:46.686 "subsystem": "iobuf", 00:32:46.686 "config": [ 00:32:46.686 { 00:32:46.686 "method": "iobuf_set_options", 00:32:46.686 "params": { 00:32:46.686 "small_pool_count": 8192, 00:32:46.686 "large_pool_count": 1024, 00:32:46.686 "small_bufsize": 8192, 00:32:46.686 "large_bufsize": 135168, 00:32:46.686 "enable_numa": false 00:32:46.686 } 00:32:46.686 } 00:32:46.686 ] 00:32:46.686 }, 00:32:46.686 { 00:32:46.686 "subsystem": "sock", 00:32:46.686 "config": [ 00:32:46.686 { 00:32:46.686 "method": "sock_set_default_impl", 00:32:46.686 "params": { 00:32:46.686 "impl_name": "posix" 00:32:46.686 } 00:32:46.686 }, 00:32:46.686 { 00:32:46.686 "method": "sock_impl_set_options", 00:32:46.686 "params": { 00:32:46.686 "impl_name": "ssl", 00:32:46.686 "recv_buf_size": 4096, 00:32:46.686 "send_buf_size": 4096, 00:32:46.686 "enable_recv_pipe": true, 00:32:46.686 "enable_quickack": false, 00:32:46.686 "enable_placement_id": 0, 00:32:46.686 "enable_zerocopy_send_server": true, 00:32:46.686 "enable_zerocopy_send_client": false, 00:32:46.686 "zerocopy_threshold": 0, 00:32:46.686 "tls_version": 0, 00:32:46.686 "enable_ktls": false 00:32:46.686 } 00:32:46.686 }, 00:32:46.686 { 00:32:46.686 "method": "sock_impl_set_options", 00:32:46.686 "params": { 00:32:46.686 "impl_name": "posix", 00:32:46.686 "recv_buf_size": 2097152, 00:32:46.686 "send_buf_size": 2097152, 00:32:46.686 "enable_recv_pipe": true, 00:32:46.686 "enable_quickack": false, 00:32:46.686 "enable_placement_id": 0, 00:32:46.686 "enable_zerocopy_send_server": true, 00:32:46.686 "enable_zerocopy_send_client": false, 00:32:46.686 "zerocopy_threshold": 0, 00:32:46.686 "tls_version": 0, 00:32:46.686 "enable_ktls": false 00:32:46.686 } 00:32:46.686 } 00:32:46.686 ] 00:32:46.686 }, 00:32:46.686 { 00:32:46.686 "subsystem": "vmd", 00:32:46.686 "config": [] 00:32:46.686 }, 00:32:46.686 { 00:32:46.686 "subsystem": "accel", 00:32:46.686 
"config": [ 00:32:46.686 { 00:32:46.686 "method": "accel_set_options", 00:32:46.686 "params": { 00:32:46.686 "small_cache_size": 128, 00:32:46.686 "large_cache_size": 16, 00:32:46.686 "task_count": 2048, 00:32:46.686 "sequence_count": 2048, 00:32:46.686 "buf_count": 2048 00:32:46.686 } 00:32:46.686 } 00:32:46.686 ] 00:32:46.686 }, 00:32:46.686 { 00:32:46.686 "subsystem": "bdev", 00:32:46.686 "config": [ 00:32:46.686 { 00:32:46.686 "method": "bdev_set_options", 00:32:46.686 "params": { 00:32:46.686 "bdev_io_pool_size": 65535, 00:32:46.686 "bdev_io_cache_size": 256, 00:32:46.686 "bdev_auto_examine": true, 00:32:46.686 "iobuf_small_cache_size": 128, 00:32:46.686 "iobuf_large_cache_size": 16 00:32:46.686 } 00:32:46.686 }, 00:32:46.686 { 00:32:46.686 "method": "bdev_raid_set_options", 00:32:46.686 "params": { 00:32:46.686 "process_window_size_kb": 1024, 00:32:46.686 "process_max_bandwidth_mb_sec": 0 00:32:46.686 } 00:32:46.686 }, 00:32:46.686 { 00:32:46.686 "method": "bdev_iscsi_set_options", 00:32:46.686 "params": { 00:32:46.686 "timeout_sec": 30 00:32:46.686 } 00:32:46.686 }, 00:32:46.686 { 00:32:46.686 "method": "bdev_nvme_set_options", 00:32:46.686 "params": { 00:32:46.686 "action_on_timeout": "none", 00:32:46.686 "timeout_us": 0, 00:32:46.686 "timeout_admin_us": 0, 00:32:46.686 "keep_alive_timeout_ms": 10000, 00:32:46.686 "arbitration_burst": 0, 00:32:46.686 "low_priority_weight": 0, 00:32:46.686 "medium_priority_weight": 0, 00:32:46.686 "high_priority_weight": 0, 00:32:46.686 "nvme_adminq_poll_period_us": 10000, 00:32:46.686 "nvme_ioq_poll_period_us": 0, 00:32:46.686 "io_queue_requests": 512, 00:32:46.686 "delay_cmd_submit": true, 00:32:46.686 "transport_retry_count": 4, 00:32:46.686 "bdev_retry_count": 3, 00:32:46.686 "transport_ack_timeout": 0, 00:32:46.686 "ctrlr_loss_timeout_sec": 0, 00:32:46.686 "reconnect_delay_sec": 0, 00:32:46.686 "fast_io_fail_timeout_sec": 0, 00:32:46.686 "disable_auto_failback": false, 00:32:46.686 "generate_uuids": false, 00:32:46.686 "transport_tos": 0, 00:32:46.686 "nvme_error_stat": false, 00:32:46.686 "rdma_srq_size": 0, 00:32:46.686 "io_path_stat": false, 00:32:46.686 "allow_accel_sequence": false, 00:32:46.686 "rdma_max_cq_size": 0, 00:32:46.686 "rdma_cm_event_timeout_ms": 0, 00:32:46.686 "dhchap_digests": [ 00:32:46.686 "sha256", 00:32:46.686 "sha384", 00:32:46.686 "sha512" 00:32:46.686 ], 00:32:46.686 "dhchap_dhgroups": [ 00:32:46.686 "null", 00:32:46.686 "ffdhe2048", 00:32:46.686 "ffdhe3072", 00:32:46.686 "ffdhe4096", 00:32:46.686 "ffdhe6144", 00:32:46.686 "ffdhe8192" 00:32:46.686 ] 00:32:46.686 } 00:32:46.686 }, 00:32:46.686 { 00:32:46.686 "method": "bdev_nvme_attach_controller", 00:32:46.686 "params": { 00:32:46.686 "name": "nvme0", 00:32:46.686 "trtype": "TCP", 00:32:46.686 "adrfam": "IPv4", 00:32:46.686 "traddr": "127.0.0.1", 00:32:46.686 "trsvcid": "4420", 00:32:46.686 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:46.686 "prchk_reftag": false, 00:32:46.686 "prchk_guard": false, 00:32:46.686 "ctrlr_loss_timeout_sec": 0, 00:32:46.686 "reconnect_delay_sec": 0, 00:32:46.686 "fast_io_fail_timeout_sec": 0, 00:32:46.686 "psk": "key0", 00:32:46.686 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:46.686 "hdgst": false, 00:32:46.686 "ddgst": false, 00:32:46.686 "multipath": "multipath" 00:32:46.686 } 00:32:46.686 }, 00:32:46.686 { 00:32:46.686 "method": "bdev_nvme_set_hotplug", 00:32:46.686 "params": { 00:32:46.686 "period_us": 100000, 00:32:46.686 "enable": false 00:32:46.686 } 00:32:46.686 }, 00:32:46.686 { 00:32:46.686 "method": "bdev_wait_for_examine" 
00:32:46.686 } 00:32:46.686 ] 00:32:46.686 }, 00:32:46.686 { 00:32:46.686 "subsystem": "nbd", 00:32:46.686 "config": [] 00:32:46.686 } 00:32:46.686 ] 00:32:46.686 }' 00:32:46.686 07:34:49 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:46.686 07:34:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:46.686 [2024-11-20 07:34:49.986843] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 00:32:46.686 [2024-11-20 07:34:49.986927] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2694115 ] 00:32:46.686 [2024-11-20 07:34:50.059713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:46.944 [2024-11-20 07:34:50.118867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:46.944 [2024-11-20 07:34:50.307570] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:47.202 07:34:50 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:47.202 07:34:50 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:32:47.202 07:34:50 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:32:47.202 07:34:50 keyring_file -- keyring/file.sh@121 -- # jq length 00:32:47.202 07:34:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:47.460 07:34:50 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:32:47.460 07:34:50 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:32:47.460 07:34:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:47.460 07:34:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:47.461 07:34:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:47.461 07:34:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:47.461 07:34:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:47.719 07:34:50 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:32:47.719 07:34:50 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:32:47.719 07:34:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:47.719 07:34:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:47.719 07:34:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:47.719 07:34:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:47.719 07:34:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:47.977 07:34:51 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:32:47.977 07:34:51 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:32:47.977 07:34:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:32:47.977 07:34:51 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:32:48.235 07:34:51 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:32:48.235 07:34:51 keyring_file -- keyring/file.sh@1 -- # cleanup 00:32:48.235 07:34:51 
keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.CTCBxI3fzx /tmp/tmp.q9ZvTZrlhE 00:32:48.235 07:34:51 keyring_file -- keyring/file.sh@20 -- # killprocess 2694115 00:32:48.235 07:34:51 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 2694115 ']' 00:32:48.235 07:34:51 keyring_file -- common/autotest_common.sh@956 -- # kill -0 2694115 00:32:48.235 07:34:51 keyring_file -- common/autotest_common.sh@957 -- # uname 00:32:48.235 07:34:51 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:48.235 07:34:51 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2694115 00:32:48.235 07:34:51 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:48.235 07:34:51 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:48.235 07:34:51 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2694115' 00:32:48.235 killing process with pid 2694115 00:32:48.235 07:34:51 keyring_file -- common/autotest_common.sh@971 -- # kill 2694115 00:32:48.235 Received shutdown signal, test time was about 1.000000 seconds 00:32:48.235 00:32:48.235 Latency(us) 00:32:48.235 [2024-11-20T06:34:51.668Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:48.235 [2024-11-20T06:34:51.668Z] =================================================================================================================== 00:32:48.235 [2024-11-20T06:34:51.668Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:48.235 07:34:51 keyring_file -- common/autotest_common.sh@976 -- # wait 2694115 00:32:48.493 07:34:51 keyring_file -- keyring/file.sh@21 -- # killprocess 2692623 00:32:48.493 07:34:51 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 2692623 ']' 00:32:48.493 07:34:51 keyring_file -- common/autotest_common.sh@956 -- # kill -0 2692623 00:32:48.493 07:34:51 keyring_file -- common/autotest_common.sh@957 -- # uname 00:32:48.493 07:34:51 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:48.493 07:34:51 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2692623 00:32:48.493 07:34:51 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:48.493 07:34:51 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:48.493 07:34:51 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2692623' 00:32:48.493 killing process with pid 2692623 00:32:48.493 07:34:51 keyring_file -- common/autotest_common.sh@971 -- # kill 2692623 00:32:48.493 07:34:51 keyring_file -- common/autotest_common.sh@976 -- # wait 2692623 00:32:49.060 00:32:49.060 real 0m14.657s 00:32:49.060 user 0m37.192s 00:32:49.060 sys 0m3.326s 00:32:49.060 07:34:52 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:49.060 07:34:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:49.060 ************************************ 00:32:49.060 END TEST keyring_file 00:32:49.060 ************************************ 00:32:49.060 07:34:52 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:32:49.060 07:34:52 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:49.060 07:34:52 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:32:49.060 07:34:52 -- common/autotest_common.sh@1109 -- # xtrace_disable 
00:32:49.060 07:34:52 -- common/autotest_common.sh@10 -- # set +x 00:32:49.060 ************************************ 00:32:49.060 START TEST keyring_linux 00:32:49.060 ************************************ 00:32:49.060 07:34:52 keyring_linux -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:49.060 Joined session keyring: 98224141 00:32:49.060 * Looking for test storage... 00:32:49.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:49.060 07:34:52 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:49.060 07:34:52 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:32:49.060 07:34:52 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:49.060 07:34:52 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:49.060 07:34:52 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:49.060 07:34:52 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:49.060 07:34:52 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:49.060 07:34:52 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:32:49.060 07:34:52 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:32:49.060 07:34:52 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:32:49.060 07:34:52 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:32:49.060 07:34:52 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:32:49.060 07:34:52 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:32:49.060 07:34:52 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:32:49.060 07:34:52 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:49.060 07:34:52 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:32:49.060 07:34:52 keyring_linux -- scripts/common.sh@345 -- # : 1 00:32:49.060 07:34:52 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:49.060 07:34:52 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:49.060 07:34:52 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:32:49.060 07:34:52 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:32:49.060 07:34:52 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:49.060 07:34:52 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:32:49.060 07:34:52 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:32:49.060 07:34:52 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:32:49.319 07:34:52 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:32:49.319 07:34:52 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:49.319 07:34:52 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:32:49.319 07:34:52 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:32:49.319 07:34:52 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:49.319 07:34:52 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:49.319 07:34:52 keyring_linux -- scripts/common.sh@368 -- # return 0 00:32:49.319 07:34:52 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:49.319 07:34:52 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:49.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.319 --rc genhtml_branch_coverage=1 00:32:49.319 --rc genhtml_function_coverage=1 00:32:49.319 --rc genhtml_legend=1 00:32:49.319 --rc geninfo_all_blocks=1 00:32:49.319 --rc geninfo_unexecuted_blocks=1 00:32:49.319 00:32:49.319 ' 00:32:49.319 07:34:52 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:49.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.319 --rc genhtml_branch_coverage=1 00:32:49.319 --rc genhtml_function_coverage=1 00:32:49.319 --rc genhtml_legend=1 00:32:49.319 --rc geninfo_all_blocks=1 00:32:49.319 --rc geninfo_unexecuted_blocks=1 00:32:49.319 00:32:49.319 ' 00:32:49.319 07:34:52 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:49.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.319 --rc genhtml_branch_coverage=1 00:32:49.319 --rc genhtml_function_coverage=1 00:32:49.319 --rc genhtml_legend=1 00:32:49.319 --rc geninfo_all_blocks=1 00:32:49.319 --rc geninfo_unexecuted_blocks=1 00:32:49.319 00:32:49.319 ' 00:32:49.319 07:34:52 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:49.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.320 --rc genhtml_branch_coverage=1 00:32:49.320 --rc genhtml_function_coverage=1 00:32:49.320 --rc genhtml_legend=1 00:32:49.320 --rc geninfo_all_blocks=1 00:32:49.320 --rc geninfo_unexecuted_blocks=1 00:32:49.320 00:32:49.320 ' 00:32:49.320 07:34:52 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:49.320 07:34:52 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:49.320 07:34:52 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:32:49.320 07:34:52 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:49.320 07:34:52 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:49.320 07:34:52 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:49.320 07:34:52 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:49.320 07:34:52 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:32:49.320 07:34:52 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:49.320 07:34:52 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:49.320 07:34:52 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:49.320 07:34:52 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:49.320 07:34:52 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:49.320 07:34:52 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:49.320 07:34:52 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:49.320 07:34:52 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:49.320 07:34:52 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:49.320 07:34:52 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:49.320 07:34:52 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:49.320 07:34:52 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:49.320 07:34:52 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:32:49.320 07:34:52 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:49.320 07:34:52 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:49.320 07:34:52 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:49.320 07:34:52 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.320 07:34:52 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.320 07:34:52 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.320 07:34:52 keyring_linux -- paths/export.sh@5 -- # export PATH 00:32:49.320 07:34:52 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:32:49.320 07:34:52 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:32:49.320 07:34:52 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:49.320 07:34:52 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:49.320 07:34:52 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:49.320 07:34:52 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:49.320 07:34:52 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:49.320 07:34:52 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:49.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:49.320 07:34:52 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:49.320 07:34:52 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:49.320 07:34:52 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:49.320 07:34:52 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:49.320 07:34:52 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:49.320 07:34:52 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:49.320 07:34:52 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:32:49.320 07:34:52 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:32:49.320 07:34:52 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:32:49.320 07:34:52 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:32:49.320 07:34:52 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:49.320 07:34:52 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:32:49.320 07:34:52 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:49.320 07:34:52 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:49.320 07:34:52 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:32:49.320 07:34:52 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:49.320 07:34:52 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:49.320 07:34:52 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:32:49.320 07:34:52 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:32:49.320 07:34:52 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:32:49.320 07:34:52 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:32:49.320 07:34:52 keyring_linux -- nvmf/common.sh@733 -- # python - 00:32:49.320 07:34:52 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:32:49.320 07:34:52 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:32:49.320 /tmp/:spdk-test:key0 00:32:49.320 07:34:52 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:32:49.320 07:34:52 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:49.320 07:34:52 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:32:49.320 07:34:52 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:49.320 07:34:52 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:49.320 07:34:52 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:32:49.320 
07:34:52 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:49.320 07:34:52 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:49.320 07:34:52 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:32:49.320 07:34:52 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:32:49.320 07:34:52 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:32:49.320 07:34:52 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:32:49.320 07:34:52 keyring_linux -- nvmf/common.sh@733 -- # python - 00:32:49.320 07:34:52 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:32:49.320 07:34:52 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:32:49.320 /tmp/:spdk-test:key1 00:32:49.320 07:34:52 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2694597 00:32:49.320 07:34:52 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:49.320 07:34:52 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2694597 00:32:49.320 07:34:52 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 2694597 ']' 00:32:49.320 07:34:52 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:49.320 07:34:52 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:49.320 07:34:52 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:49.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:49.320 07:34:52 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:49.320 07:34:52 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:49.320 [2024-11-20 07:34:52.645735] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:32:49.320 [2024-11-20 07:34:52.645824] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2694597 ] 00:32:49.320 [2024-11-20 07:34:52.710947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.579 [2024-11-20 07:34:52.769791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:49.837 07:34:53 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:49.837 07:34:53 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:32:49.837 07:34:53 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:32:49.837 07:34:53 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.837 07:34:53 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:49.837 [2024-11-20 07:34:53.047422] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:49.837 null0 00:32:49.837 [2024-11-20 07:34:53.079487] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:49.837 [2024-11-20 07:34:53.080016] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:49.837 07:34:53 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.837 07:34:53 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:32:49.837 544045379 00:32:49.837 07:34:53 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:32:49.837 66494697 00:32:49.837 07:34:53 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2694609 00:32:49.837 07:34:53 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2694609 /var/tmp/bperf.sock 00:32:49.837 07:34:53 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:32:49.837 07:34:53 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 2694609 ']' 00:32:49.837 07:34:53 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:49.837 07:34:53 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:49.837 07:34:53 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:49.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:49.837 07:34:53 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:49.837 07:34:53 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:49.837 [2024-11-20 07:34:53.149506] Starting SPDK v25.01-pre git sha1 5716007f5 / DPDK 24.03.0 initialization... 
00:32:49.837 [2024-11-20 07:34:53.149592] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2694609 ] 00:32:49.837 [2024-11-20 07:34:53.215060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:50.095 [2024-11-20 07:34:53.274706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:50.095 07:34:53 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:50.095 07:34:53 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:32:50.095 07:34:53 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:32:50.095 07:34:53 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:32:50.353 07:34:53 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:32:50.353 07:34:53 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:50.611 07:34:54 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:50.612 07:34:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:50.870 [2024-11-20 07:34:54.247605] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:51.129 nvme0n1 00:32:51.129 07:34:54 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:32:51.129 07:34:54 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:32:51.129 07:34:54 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:51.129 07:34:54 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:51.129 07:34:54 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:51.129 07:34:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:51.387 07:34:54 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:32:51.387 07:34:54 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:51.387 07:34:54 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:32:51.387 07:34:54 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:32:51.387 07:34:54 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:51.387 07:34:54 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:32:51.387 07:34:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:51.645 07:34:54 keyring_linux -- keyring/linux.sh@25 -- # sn=544045379 00:32:51.645 07:34:54 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:32:51.645 07:34:54 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:32:51.645 07:34:54 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 544045379 == \5\4\4\0\4\5\3\7\9 ]] 00:32:51.645 07:34:54 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 544045379 00:32:51.645 07:34:54 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:32:51.645 07:34:54 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:51.645 Running I/O for 1 seconds... 00:32:52.836 11023.00 IOPS, 43.06 MiB/s 00:32:52.836 Latency(us) 00:32:52.836 [2024-11-20T06:34:56.269Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:52.836 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:52.836 nvme0n1 : 1.01 11030.83 43.09 0.00 0.00 11532.66 8738.13 20194.80 00:32:52.836 [2024-11-20T06:34:56.269Z] =================================================================================================================== 00:32:52.836 [2024-11-20T06:34:56.269Z] Total : 11030.83 43.09 0.00 0.00 11532.66 8738.13 20194.80 00:32:52.836 { 00:32:52.836 "results": [ 00:32:52.836 { 00:32:52.836 "job": "nvme0n1", 00:32:52.836 "core_mask": "0x2", 00:32:52.836 "workload": "randread", 00:32:52.836 "status": "finished", 00:32:52.836 "queue_depth": 128, 00:32:52.836 "io_size": 4096, 00:32:52.836 "runtime": 1.010985, 00:32:52.836 "iops": 11030.826372300282, 00:32:52.836 "mibps": 43.089165516797976, 00:32:52.836 "io_failed": 0, 00:32:52.836 "io_timeout": 0, 00:32:52.836 "avg_latency_us": 11532.661382645198, 00:32:52.836 "min_latency_us": 8738.133333333333, 00:32:52.836 "max_latency_us": 20194.79703703704 00:32:52.836 } 00:32:52.836 ], 00:32:52.836 "core_count": 1 00:32:52.836 } 00:32:52.836 07:34:56 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:52.836 07:34:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:53.094 07:34:56 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:32:53.094 07:34:56 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:32:53.094 07:34:56 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:53.095 07:34:56 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:53.095 07:34:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:53.095 07:34:56 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:53.353 07:34:56 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:32:53.353 07:34:56 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:53.353 07:34:56 keyring_linux -- keyring/linux.sh@23 -- # return 00:32:53.353 07:34:56 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:53.353 07:34:56 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:32:53.353 07:34:56 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:32:53.353 07:34:56 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:32:53.353 07:34:56 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:53.353 07:34:56 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:32:53.353 07:34:56 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:53.353 07:34:56 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:53.353 07:34:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:53.611 [2024-11-20 07:34:56.833548] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:53.611 [2024-11-20 07:34:56.833549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2182be0 (107): Transport endpoint is not connected 00:32:53.611 [2024-11-20 07:34:56.834542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2182be0 (9): Bad file descriptor 00:32:53.611 [2024-11-20 07:34:56.835541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:32:53.611 [2024-11-20 07:34:56.835561] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:53.611 [2024-11-20 07:34:56.835575] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:32:53.611 [2024-11-20 07:34:56.835590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state.
00:32:53.611 request: 00:32:53.611 { 00:32:53.611 "name": "nvme0", 00:32:53.611 "trtype": "tcp", 00:32:53.611 "traddr": "127.0.0.1", 00:32:53.611 "adrfam": "ipv4", 00:32:53.611 "trsvcid": "4420", 00:32:53.611 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:53.611 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:53.611 "prchk_reftag": false, 00:32:53.611 "prchk_guard": false, 00:32:53.611 "hdgst": false, 00:32:53.611 "ddgst": false, 00:32:53.611 "psk": ":spdk-test:key1", 00:32:53.611 "allow_unrecognized_csi": false, 00:32:53.611 "method": "bdev_nvme_attach_controller", 00:32:53.611 "req_id": 1 00:32:53.611 } 00:32:53.611 Got JSON-RPC error response 00:32:53.611 response: 00:32:53.611 { 00:32:53.611 "code": -5, 00:32:53.611 "message": "Input/output error" 00:32:53.611 } 00:32:53.611 07:34:56 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:32:53.611 07:34:56 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:53.611 07:34:56 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:53.611 07:34:56 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:53.611 07:34:56 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:32:53.611 07:34:56 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:53.611 07:34:56 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:32:53.611 07:34:56 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:32:53.611 07:34:56 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:32:53.611 07:34:56 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:32:53.611 07:34:56 keyring_linux -- keyring/linux.sh@33 -- # sn=544045379 00:32:53.611 07:34:56 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 544045379 00:32:53.611 1 links removed 00:32:53.611 07:34:56 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:53.611 07:34:56 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:32:53.611 07:34:56 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:32:53.611 07:34:56 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:32:53.611 07:34:56 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:32:53.611 07:34:56 keyring_linux -- keyring/linux.sh@33 -- # sn=66494697 00:32:53.611 07:34:56 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 66494697 00:32:53.611 1 links removed 00:32:53.611 07:34:56 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2694609 00:32:53.611 07:34:56 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 2694609 ']' 00:32:53.611 07:34:56 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 2694609 00:32:53.611 07:34:56 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:32:53.611 07:34:56 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:53.611 07:34:56 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2694609 00:32:53.611 07:34:56 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:53.611 07:34:56 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:53.611 07:34:56 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2694609' 00:32:53.611 killing process with pid 2694609 00:32:53.611 07:34:56 keyring_linux -- common/autotest_common.sh@971 -- # kill 2694609 00:32:53.611 Received shutdown signal, test time was about 1.000000 seconds 00:32:53.611 00:32:53.611 
Latency(us) 00:32:53.611 [2024-11-20T06:34:57.044Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:53.611 [2024-11-20T06:34:57.044Z] =================================================================================================================== 00:32:53.611 [2024-11-20T06:34:57.044Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:53.611 07:34:56 keyring_linux -- common/autotest_common.sh@976 -- # wait 2694609 00:32:53.870 07:34:57 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2694597 00:32:53.870 07:34:57 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 2694597 ']' 00:32:53.870 07:34:57 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 2694597 00:32:53.870 07:34:57 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:32:53.870 07:34:57 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:53.870 07:34:57 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2694597 00:32:53.870 07:34:57 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:53.870 07:34:57 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:53.870 07:34:57 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2694597' 00:32:53.870 killing process with pid 2694597 00:32:53.870 07:34:57 keyring_linux -- common/autotest_common.sh@971 -- # kill 2694597 00:32:53.870 07:34:57 keyring_linux -- common/autotest_common.sh@976 -- # wait 2694597 00:32:54.435 00:32:54.435 real 0m5.260s 00:32:54.435 user 0m10.372s 00:32:54.435 sys 0m1.624s 00:32:54.435 07:34:57 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:54.435 07:34:57 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:54.435 ************************************ 00:32:54.435 END TEST keyring_linux 00:32:54.435 ************************************ 00:32:54.435 07:34:57 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:32:54.435 07:34:57 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:32:54.435 07:34:57 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:32:54.435 07:34:57 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:32:54.435 07:34:57 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:32:54.435 07:34:57 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:32:54.435 07:34:57 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:32:54.435 07:34:57 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:32:54.435 07:34:57 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:32:54.435 07:34:57 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:32:54.435 07:34:57 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:32:54.435 07:34:57 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:32:54.435 07:34:57 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:32:54.435 07:34:57 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:32:54.435 07:34:57 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:32:54.435 07:34:57 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:32:54.435 07:34:57 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:32:54.435 07:34:57 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:54.435 07:34:57 -- common/autotest_common.sh@10 -- # set +x 00:32:54.435 07:34:57 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:32:54.435 07:34:57 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:32:54.435 07:34:57 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:32:54.435 07:34:57 -- common/autotest_common.sh@10 -- # set +x 00:32:56.335 INFO: APP EXITING 
00:32:56.335 INFO: killing all VMs 00:32:56.335 INFO: killing vhost app 00:32:56.335 INFO: EXIT DONE 00:32:57.710 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:32:57.710 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:32:57.710 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:32:57.710 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:32:57.710 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:32:57.710 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:32:57.710 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:32:57.710 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:32:57.710 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:32:57.710 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:32:57.710 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:32:57.710 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:32:57.710 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:32:57.710 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:32:57.710 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:32:57.710 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:32:57.710 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:32:59.140 Cleaning 00:32:59.140 Removing: /var/run/dpdk/spdk0/config 00:32:59.140 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:59.140 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:59.140 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:59.140 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:59.140 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:59.140 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:59.140 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:59.140 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:59.140 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:59.140 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:59.140 Removing: /var/run/dpdk/spdk1/config 00:32:59.140 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:59.140 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:59.140 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:59.140 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:59.140 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:32:59.140 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:32:59.140 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:32:59.140 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:32:59.140 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:59.140 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:59.140 Removing: /var/run/dpdk/spdk2/config 00:32:59.140 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:59.140 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:59.140 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:59.140 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:59.140 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:32:59.140 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:32:59.140 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:32:59.140 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:32:59.140 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:59.140 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:59.140 Removing: /var/run/dpdk/spdk3/config 00:32:59.140 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:59.140 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:59.140 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:59.140 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:59.140 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:32:59.140 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:32:59.140 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:32:59.140 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:32:59.140 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:59.140 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:59.140 Removing: /var/run/dpdk/spdk4/config 00:32:59.140 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:59.140 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:59.140 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:59.140 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:59.140 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:32:59.140 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:32:59.140 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:32:59.140 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:32:59.140 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:59.140 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:59.140 Removing: /dev/shm/bdev_svc_trace.1 00:32:59.140 Removing: /dev/shm/nvmf_trace.0 00:32:59.140 Removing: /dev/shm/spdk_tgt_trace.pid2373367 00:32:59.140 Removing: /var/run/dpdk/spdk0 00:32:59.140 Removing: /var/run/dpdk/spdk1 00:32:59.140 Removing: /var/run/dpdk/spdk2 00:32:59.140 Removing: /var/run/dpdk/spdk3 00:32:59.140 Removing: /var/run/dpdk/spdk4 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2371701 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2372438 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2373367 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2373710 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2374495 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2374656 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2375731 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2375910 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2376265 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2377468 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2378390 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2378706 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2378901 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2379208 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2379434 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2379592 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2379753 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2379943 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2380254 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2382739 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2382907 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2383069 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2383077 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2383506 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2383514 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2383940 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2383949 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2384118 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2384249 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2384411 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2384421 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2384920 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2385075 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2385284 00:32:59.140 Removing: 
/var/run/dpdk/spdk_pid2387516 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2390161 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2397152 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2397568 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2400081 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2400247 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2402884 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2406618 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2409423 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2415862 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2421104 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2422431 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2423094 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2433359 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2435771 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2463407 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2466710 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2470553 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2474831 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2474833 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2475487 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2476143 00:32:59.140 Removing: /var/run/dpdk/spdk_pid2476699 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2477223 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2477230 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2477371 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2477509 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2477511 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2478166 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2478826 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2479411 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2479879 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2480006 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2480143 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2481544 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2482390 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2487606 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2515752 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2518684 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2519785 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2521085 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2521225 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2521365 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2521507 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2522072 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2523391 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2524123 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2524554 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2526173 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2526474 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2527035 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2529423 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2533336 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2533337 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2533338 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2535556 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2540372 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2543045 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2546755 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2547681 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2548731 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2549821 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2552589 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2555122 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2557414 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2561654 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2561741 00:32:59.400 Removing: 
/var/run/dpdk/spdk_pid2564561 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2564697 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2564847 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2565214 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2565228 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2567996 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2568339 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2571007 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2573607 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2577036 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2580522 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2587013 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2591498 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2591504 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2603997 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2604504 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2604933 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2605432 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2606038 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2606947 00:32:59.400 Removing: /var/run/dpdk/spdk_pid2607473 00:32:59.401 Removing: /var/run/dpdk/spdk_pid2607883 00:32:59.401 Removing: /var/run/dpdk/spdk_pid2610390 00:32:59.401 Removing: /var/run/dpdk/spdk_pid2610539 00:32:59.401 Removing: /var/run/dpdk/spdk_pid2614342 00:32:59.401 Removing: /var/run/dpdk/spdk_pid2614509 00:32:59.401 Removing: /var/run/dpdk/spdk_pid2617871 00:32:59.401 Removing: /var/run/dpdk/spdk_pid2620480 00:32:59.401 Removing: /var/run/dpdk/spdk_pid2627370 00:32:59.401 Removing: /var/run/dpdk/spdk_pid2627813 00:32:59.401 Removing: /var/run/dpdk/spdk_pid2630202 00:32:59.401 Removing: /var/run/dpdk/spdk_pid2630474 00:32:59.401 Removing: /var/run/dpdk/spdk_pid2633100 00:32:59.401 Removing: /var/run/dpdk/spdk_pid2636813 00:32:59.401 Removing: /var/run/dpdk/spdk_pid2639068 00:32:59.401 Removing: /var/run/dpdk/spdk_pid2645969 00:32:59.401 Removing: /var/run/dpdk/spdk_pid2651177 00:32:59.401 Removing: /var/run/dpdk/spdk_pid2652356 00:32:59.401 Removing: /var/run/dpdk/spdk_pid2653018 00:32:59.401 Removing: /var/run/dpdk/spdk_pid2663202 00:32:59.401 Removing: /var/run/dpdk/spdk_pid2665461 00:32:59.401 Removing: /var/run/dpdk/spdk_pid2667459 00:32:59.401 Removing: /var/run/dpdk/spdk_pid2672380 00:32:59.401 Removing: /var/run/dpdk/spdk_pid2672463 00:32:59.401 Removing: /var/run/dpdk/spdk_pid2675408 00:32:59.401 Removing: /var/run/dpdk/spdk_pid2676921 00:32:59.401 Removing: /var/run/dpdk/spdk_pid2678831 00:32:59.401 Removing: /var/run/dpdk/spdk_pid2679691 00:32:59.401 Removing: /var/run/dpdk/spdk_pid2680983 00:32:59.401 Removing: /var/run/dpdk/spdk_pid2681853 00:32:59.401 Removing: /var/run/dpdk/spdk_pid2687190 00:32:59.660 Removing: /var/run/dpdk/spdk_pid2687533 00:32:59.660 Removing: /var/run/dpdk/spdk_pid2687923 00:32:59.660 Removing: /var/run/dpdk/spdk_pid2689479 00:32:59.660 Removing: /var/run/dpdk/spdk_pid2689873 00:32:59.660 Removing: /var/run/dpdk/spdk_pid2690154 00:32:59.660 Removing: /var/run/dpdk/spdk_pid2692623 00:32:59.660 Removing: /var/run/dpdk/spdk_pid2692637 00:32:59.660 Removing: /var/run/dpdk/spdk_pid2694115 00:32:59.660 Removing: /var/run/dpdk/spdk_pid2694597 00:32:59.660 Removing: /var/run/dpdk/spdk_pid2694609 00:32:59.660 Clean 00:32:59.660 07:35:02 -- common/autotest_common.sh@1451 -- # return 0 00:32:59.660 07:35:02 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:32:59.660 07:35:02 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:59.660 07:35:02 -- common/autotest_common.sh@10 -- # set +x 00:32:59.660 07:35:02 -- 
spdk/autotest.sh@387 -- # timing_exit autotest 00:32:59.660 07:35:02 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:59.660 07:35:02 -- common/autotest_common.sh@10 -- # set +x 00:32:59.660 07:35:02 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:32:59.660 07:35:02 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:32:59.660 07:35:02 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:32:59.660 07:35:02 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:32:59.660 07:35:02 -- spdk/autotest.sh@394 -- # hostname 00:32:59.660 07:35:02 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:32:59.919 geninfo: WARNING: invalid characters removed from testname! 00:33:32.007 07:35:33 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:34.554 07:35:37 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:37.851 07:35:40 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:41.150 07:35:43 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:43.690 07:35:46 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:46.987 07:35:49 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:50.285 07:35:52 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:50.285 07:35:53 -- spdk/autorun.sh@1 -- $ timing_finish 00:33:50.285 07:35:53 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:33:50.285 07:35:53 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:33:50.285 07:35:53 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:33:50.285 07:35:53 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:50.285 + [[ -n 2301821 ]] 00:33:50.285 + sudo kill 2301821 00:33:50.295 [Pipeline] } 00:33:50.313 [Pipeline] // stage 00:33:50.319 [Pipeline] } 00:33:50.333 [Pipeline] // timeout 00:33:50.338 [Pipeline] } 00:33:50.353 [Pipeline] // catchError 00:33:50.358 [Pipeline] } 00:33:50.373 [Pipeline] // wrap 00:33:50.379 [Pipeline] } 00:33:50.393 [Pipeline] // catchError 00:33:50.403 [Pipeline] stage 00:33:50.406 [Pipeline] { (Epilogue) 00:33:50.419 [Pipeline] catchError 00:33:50.421 [Pipeline] { 00:33:50.435 [Pipeline] echo 00:33:50.437 Cleanup processes 00:33:50.443 [Pipeline] sh 00:33:50.730 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:50.730 2705287 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:50.745 [Pipeline] sh 00:33:51.031 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:51.031 ++ grep -v 'sudo pgrep' 00:33:51.031 ++ awk '{print $1}' 00:33:51.031 + sudo kill -9 00:33:51.031 + true 00:33:51.044 [Pipeline] sh 00:33:51.329 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:01.385 [Pipeline] sh 00:34:01.672 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:01.672 Artifacts sizes are good 00:34:01.688 [Pipeline] archiveArtifacts 00:34:01.695 Archiving artifacts 00:34:01.844 [Pipeline] sh 00:34:02.127 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:34:02.144 [Pipeline] cleanWs 00:34:02.155 [WS-CLEANUP] Deleting project workspace... 00:34:02.155 [WS-CLEANUP] Deferred wipeout is used... 00:34:02.163 [WS-CLEANUP] done 00:34:02.165 [Pipeline] } 00:34:02.181 [Pipeline] // catchError 00:34:02.192 [Pipeline] sh 00:34:02.470 + logger -p user.info -t JENKINS-CI 00:34:02.478 [Pipeline] } 00:34:02.493 [Pipeline] // stage 00:34:02.498 [Pipeline] } 00:34:02.513 [Pipeline] // node 00:34:02.518 [Pipeline] End of Pipeline 00:34:02.560 Finished: SUCCESS